[Binary artifact: tar archive `var/home/core/zuul-output/` containing `logs/kubelet.log.gz` (gzip-compressed kubelet log from a Zuul CI job). The compressed payload is not recoverable as text.]
clXnw1B\S,7(g7:uo?:}{û7zۻ7o?bNq|Xq=0 I,LJ  :emj*IբVWAzoO]]=7[ DI7CoR¬:5NqQxeiHG faݮ)vKW첈BoJʼnC(r &3i& K^R.[k;}m$6/F@1cHaHXH)j$JMiBj*0w6So6Ԑ]^|/g3-3X#gRf9%s$]J-QV1DV ,tZ9٩{Ė0n7^UV۞|b'亳Vun;#y,8LP9Gyם+ =,Z78Ŝ9s˭#:F sbZo+GӤ}f5A=*Grp)BSϘ8P=tH}+ N5X4.` ıe!{㚴ǻhu}-@1|Vݶ; Q6ܐAc) +^Kğ9~Va<݊],C)\alo*I+UR^ \*'J?Aj*HKuJRvE+J2/pj*I)uW/#=$b{WI\7pRw W>k$[Sr^ee G ű̽02ď8=ߎiT9&9ȆG(n7|.~T{61o_ggJ gE} R_E i-7fZ@O4UWaC ,E&w&{Vy:f7 M_7ʸ ӯG{Az= Dhop}¼u*䲣/ f{W#\$Dx*IIHW/$[X\$x$iw\$IƗ WE*RlUq Ȍ'I-a!RuzXF[/|ޖS+ѩ L|9sŘ;+1ayCt=@·)><)y8e۟;fbSrCkmH˞=H}ȃI6 l >F_%mS=CR!ii(Qq3U_Frߑy>&v̌~q#JmB朤A(euFu^IOrFX`"/ւ~.K9-4] "x5dJ9[i4V}+;fJM3K3{M{t}ooXfABjDO_UV?;96pbU 6VͲIG%ssoO'3Gotoq0YId"rI(xbF8YgέFr[12geD G* R1pR!8iJd.s>w*hvFΞ~|LO?4x> }0u'׻^r2[>ͳ޾DŽ]xQbwV,pqzq ?r^ʜݽk{$~ku >u_Ct]zetՌ>=lǤ1;YOH#+Y-ui{3 =%3d|˫Ky~o8yh[j+\n9tߪ11Oife>P^-B;+}rmg͹ղ>>_讛w>_h|Ձ펚HX&]A CŒd$lmvm#IyoBZ(J1LF" dG&%8gF#2D =A€ʀYr,q\8-<V"hnGG> >V~'J'tqAmכh~hv c^џB[o ܓ|JGSD$>*W 2Rl^ȐdyъS lN# eyTֻLTl=7F$T92߱wFn ވ9.{nBb嫷YmƔl+z,M{rڑdv7jG##6PKF#YɼdA9 ( ]&\hA!pR(m|> t|=b@݂0-r:8E>7})z:;ԌEq0$4CSd;'( F[LȴnJ`0S+H%cP$Rh\a]'HtFn0)`GqFH>c49+ӵv,Fj,زǖ?ҷq:yyb# %rz 23SzZUOy6SUɍ >c@(]&Q(GA%Dt‡kN4O> ߟ)j;F)g/^a()U%Ղ@#%Nҡ#^ iZ˘FN/Y̽w&7^[6h}G]D}(ڲ.G'#ԅ/t hc5jx[~$@zhe Bϖ4FttŹA\ /jeWܮö Fu[ ~FM w-xV ,4Ӕ/]|aꐔ)^L>V:qĪMܝ %oO/^3xfj)dQIݥZfbPR&&1-|iK5Ś$| w 5gU8:At6Y{4N)SŔ)nsRFg¬Doo̾V=8uF.}VwS 6{QZT_zgZ6y0mJǿu"}d4#`&WsD&|Y^!eD-nRr}#jll]jm>&^wG.K.Hx8#ǵ0օrG:W8N 5=x`I-7h* {TXMFc~»D6YԞ(&_Ya9N5х<sˣ4N桅Je`P 56%y6ͧקQ}eFCr}`UUUTNyˣNI6¡j##6;{(j)B!HAaK ;=*sVfIbunHp[-Ut[t: JޅT Yq(&-syg쓇Sc u3_Krm?}뗷 S j61{j;x4^mlƾ `v 7ɼBTO+|F$>(i@qPuB="% n`]୷DBML9Å8c :vuFΞ%Vw\<1N &YY`$Xl#I%0FFL1_uYY~^ֻUK01J dc9xց IzR 0wq_*5C!y{'|naݫf9YtM&-1~ ^PO6>2deA@ϲgga8GSM{ Rh9kǔ% df٤Jg&B8` *g#(5$ Rpe[6#w1h!K1у2u:p@HMTf1;ksasPjmnKʭR%[$moN[=$ l<^+UX1WS^;'N((Hj8JC2x7,p-!g\2qrNΌxg3ٜ \m5CU9}d GtP&2cq37Ujr5a|_{g x^'sn0= N~H=bP˿Zի싏*^ԅ"Z\}EK.翷5N4‰֦t| +,y7A3E[s7>1a+b.8gͽޮ9Gy'~sǫ;XJ֕@m^mѾԭˬ3[I6[|G=Ysx1oU۾rmU 7oVด"i2ejR IuuO`lTOO3p\9~?Ϸ_osa߿Ǜo~M'Iga:i#IqGpoejiZoҺ]/n:[ֽ/>ژ\ \}~=N? g2^/7uݠթل`yljWMٖ&$z8#ꧻT)aoTIH ]y1[vPx YY^.1;_jH#7n#%\jBh9, ΃aY'$YMb3ȝIΆʫjkGOF$ӀxX%&)t6h }d8+憹N't*-3x`Z4ݦE/{PC}4gۃlDlU}γ8NEBrMF9GUXY @rY:QB\TpUJ"}( X/7(-hH)&LI&I&8$$Ι+2.8\+9я n帯 Pf7bLk}n~2c ^ܶ=N=;[rN~P)jRg.ݤ *筭Z_  xr-JAYQۏY D*][I|2 Uң7.xa"Zu, ٣>tDnޔl]JOK|=rc3uFt>uitډVlz8БTZU O[>(i.9+Ti- Dh.;m2dѶhBd,PYcmBv8\  4t>쒧"3҈I,41qK-h%UC qMk#͗xt  Ts&#^YJtbZHrJsEeއ `R@vq\^lB&'u&/mULFsba4p 6 Ի1[d2">  L g'07>8A@SR`َSDv'gQ;@<dFRD@MT6(JrMH0]$DR^ƎHƦĂ?{ȍeO3gE4|FmYR$nb\Vl٭eږ鎪uY$Ͻe~Z{թ@w[nV:s /xQ찮aٜ*9 VQ1Ce*$Se>ց;$}2wsr84FCdLc: RRҋ,ŘuQNy+:zp ˩.h[SEКSDpJK;zgB|i)g7Z\LRPD=ZTu5eS~y֫y㜜`ɭ^SWs`u἖Jh;0/8B(Jx4Z 5%BKB,{.*J!$HZU!}.B2*I%XJ A42gsb\b>7/23^SgXLl|ήguB\*;/̈+CjМ1 6X! 
" ЩlnfА8~8ǀeyGT"ٸQ^;8uޏ/(h2ޏć.񏽓_OCxmbգ8lη;zS VyӇep1Wg~QʥO""q㤤K,E-$\:qnAw/;f_j%jy뫛ϟ[CO_Ź) ^DLƨ@#IZ{\"Fȴe'[Eol~#Ѣkoځs%bd[o;s c|0j0a.ġR@  '@DeVH%Zi13R=u@ADZb^‰aQ* 2詒̆YsېxVes3/9ԁOn䣣%)ޑb7D8 O&.c/{0i|I)aoߡfGW(h*+UVC+R++&WY`P\CUC,\B \B \J:\e)cW‘?&B9; \eiURvp*ʀ!KwPU[\mB}p09:`G[oQ%O﷿7ϛ[~!N({o)}#4R+o޻+[b>%t(#oG/F N朥=sZyX?sDw볬\% ]oi07֞ +]>-7O?;)I{b*'99-"\@6G2P\GO~xԤceHȘc+nE5OzIN5} Ǘj 7d "Jq_~ Ó^Kj*ש6i|ܐ=?[g(bJKũ>UIOLLݹۇE̜M8&X f?GiW(!~*sKY NROԤ@4MI "!Fˌ INQLSt AE,ŧqq%zWYbT#Q~D[CxQ( G(3AY穀s&)ic6Y(!qt:ꚃ*躜H<&څhe3(3Y΢DzM~)&Ύuצ 7tH0{XmB|m;4χnf{_y뱤d|y 9A'Hn9po')jڱBEhbI!v0^ 0HqgqPu I"MG`FEBeީ.Jd*#\YVyH/g:VVcޱ6'jվy-0`X"ZX*HL+$V$M@xʂ뀧 AZ[C Rz*l;$Fz0)LjLdl$ 1FlN7G9됧dn0adp_9~'E"dqVK +hP /% )(~ >uت5w~bK-֬]G 6DJkv]@1Y/ۛn/T5a\ rR\,'Xb!Q@pF8ƕ'Tq{% fȜ4iPP"HpURQ`BEwX4n]8( jޯqT{FeGO *jzvIb;"Fnl%6vj^4=R||\^}ןύ']ftq:'%d .ýqt0?poveyI李&SdgKo'̲'̝[9菽_İL\-I#s`͗jJjmR?CExa<Ǎ#N_r"k7L7NKc;Mc~LWmqį db@: iZݿiTZ; ퟛj"b4ϞX ftEhUEAV[{{xhͻ<]O`2\Km"7|{Kϛm[>۳f覤vVQzË*qºϝ%9 QBY@, i" 7$1frM")#ާ`'&,D*$*%APpmJq;KF`<"qiф%sx%1ItW 1Գ8<nuL1dUubD0/53/dJ3N_osFzI,R"sY%Lj'?6qi5xJI߅Ц,k*kXBV@CZkϋSZ~BKc=w0z Yfi<*ְ@ 68gaA]o,0^}`/`hȩ.IWqkdelHUv)xƹ ߆aDyxՙ!:3V3iY%1p8=2fۙ"^y2P? U5vÑ8+@{$VLk@E\,ZNz(#(y>/zΣ{V+!p)OIQnudC; N-Т7 edZ8B&GCpظ$CGųD*䵼}gOqHJ=,Q3_u׃s"IRzpH$Zp20^&uQoZ#g*tHn8l;7WZun[JkXgAaCK}R*nx|HބB$djk/XTArCoؓ:8k}@ PVDeα@-Dfhf#.QjlМqBsVpJ&f>罥BEtՉKYJ{ZFajm>$e\oK?Nn0_(xbo"U2GdMjvMӵ޼mf2e鐮M@JҪe >h='B2HΎ3N.[LY|e6jx<}Sg\7wק~zϺ5 GWLF=ݚy2yzuVOd+Bo6wyvMO'+H||nvE40՘GUI%`#3;۽v78шEC tGܣM1:K)ъSRkweJ2_4W]QɊmf<pwuM2#l/0LyT!Yʮ RJUnqF'MsġP`CaIJEo㉝< pB]^V2.%t CY | ~ߟθ6ݩ7/ώ6y 8;ڵQo}Fw%&CPoNבO 8^O6Ҵ_3룪yytٸ=;5|zb?;zn^y$I2wp("(TUZ&T/0va| ێ{s_-RmMyY^ E+rm^gч~YXG5^Zs vyqFY‡IÏ0%tԫg|͟=l% b@}|6rml{\h6#_T H\ pPV؂h A,AU">,);=˭==vlS [mR <)DP>y-TőJeȩF%\ԚȅƁwUP裶4EKhy5r6FB30S*ML}{37]pIȷ7=6eEwJg&KLR,\MP"8\F~Y`ap"r̡*gx/nxgduV6L#~NPD2V`FYYrYy,*j0 Sº 3R4`q59d\Y*vG)9&rYsٱ6:XiAB㓲D4EH1(ƼT[ LI&p%"]Aڦɋ<`jWZMde2" <+0)7e*ɵ i^R\`T s!\73(kvwzaܻ~}0&݆we*\8gxP c$v ev?8z 5{u8 |A%959~甇c7?xu E .K6\M%SA06ջd8tb]O򌜍+WiRAr!] FOaǜkV+_C=qL6QՆ~כ{^x>?-Ww^7^^5s>[nc;9 ?ݖE^\dp/ap&iSNznXs7\5v3,oQ# #G1k~<УiWFV:~ɦ^U2&7G0ܱDܗƅ7L\cVgQW4'~۽п@v˯?=WxޞSf_QGL1wD;;^y׆57Iתͷ|~-i}(>L>R8A|C~\Юn SYPv$@("URb.@&H3`( 4e*LQykz}6=wDj#1H!5JI SnPrXB@. i:Il2V5 QH0^Rz>ځW E͢0Rh酣%#[gU.1UaQQޝlysPs[-+Rnܺ+Uwj`xw{m`T-AozU]u\z[4y#{ȍ{&^ ?>j`k\_Z|ƥʽRQRaK-} 2'-T@l᥷Eb `4CJ_BA k Q'RȣRFe:%*EKZy]Av <丯@Yi]ج澳KS8Â2b mL2F*@Q7xpcI[9TXP&`Ċ.Ĥp 1]ICBhKFJHȾk6 Z.i3`J$:#.*mAX"% 11DҀJv!DF9#~?PvW( d<)#& "W8/s2QLtGwc= @$pz ES*h>!cX22D( فYGQfgǎ%uщ L Nj5Ʌ-7L[+P%!:,蒌T;oe Irdeh3AP k"xt..JtE*u2v@2ogE~_}6rKY&)7DE9y 4t@mjvHuBYx0JHa$hNYwKRXJcW*%VŜ[)!*OGF iQR;pz2Rл*OLzd~vوi}۠\$γt0xO|9YQK4"ĉb|8Qt(Q+%D(2W7M&Jkfω۝ƿToR鬙Jlv ɕAySym}PG $P5{<"2r[U'o mʝxdz`s5\o; Rɹl?#F'˜ +&8/UBp" FR"H4mRg紑+WFWf7=bC%XĨ4 Μ6"@b.U%H>$J")!d`$POAG{&<[#g=F[-rZ̻+OFEwK]Ⱦ3ÍFDCA'J=$'կ " uQ>^Q 3`Z"'}-jH  Pc6Q`v@ ?I,Us Q¸IȄ*Au㭢N Y!j֭T,b:Q"u;˝^iT˨5r6Q[d l@ǫ/?RrLzIըm͇<>*Q$B="=BAe2U(~gIi&ЌKn Zj^F51 m^PM\Pʐ'[!E%m!ܗBGҺ0}͖ow۲Id}oMKƬp.ꪼA=| U!(JGBT@r-#K1JQ4Fh72:ŘwreLvBSH@0H,:igU?]nG^ Lya=#cdžay0kKHaFV_xIi(2*9AJ0HFfݹGf\6M7B7`bGN*3^>?دlߝpv>K97? ]LJ?8bZP7Y i Q)搃B(d} m:.PY셬QiUaS[4,avcQhKboѪ;w#vN%-:~訔Z,QHr֕Y9HȤE-i/"0HjXRF+dfXE'M&+HE'đ1?1Ģ\Yw^/c*0n "6M#"8 m2`ILt jV&bM5JFd(m"KBa6&ZbNjT9*%JɧJZIHmcDl֝{Lȸ8εMsͬdC\A 8m0dFiC(> @d % ) JThHYǦx ͏aJ ;kgtԏwe87D?>S5NQhYxgki}B.Pve!$cQp6PxM!'$D s;QR%:b6KYO`$.ɋtm4n%5jފ3t4%ل2aB1'd!%8T*u8Lɓ0ǻ]bؾ^9)gI9gtIr}q/!@&w$I di[ѹyZb`B^QeA|h( ^J9 By|*AXjG~?%1X, YTARآQY&-Xk0'm!:&uHbA,'ʩ>@&3L$L%5ʘ{O-*\8$tjZeX67ʬIGx?{{}? /<NԩOIwP-hO8g?#VPcGܯϬ^vg!P'dBjvg:ٌ|gvxCcǖ{+([U*.CbtnV ~ Ki߯^? 
}gv?8 ҟ1o% vOT)d }{',nry/nMNZ ljQrAIޏ:ST?Ϗ|svzebeg@8'V9 Njx&GF䇳>0Y=1M^kZBm -kFoF۵\6wř& ֧XGd||г6Oo90k[UV7uj7ڰa$K30eTjWt>1__^Ժ[.j&9d~x?۷՟wR7fox ?>κ.8xp~~X鿿M{ݪinoٴpm˻^<{-iKQb0ށ~?o&Y{~ţ.\+p=;r)hPXWJ'pXi3ӳm `jf0m)cӕߑk7]xnUC3*l~b>yt@%Nѫ)MOr}vv1o׿WׯQ9ׯ~Oކٻ?'ӏ,E>T0+xPwQ'|?}zk.> ~ V}+HQ~ҝU\^c^u:%FHGB>v.Np{Wp>[G{lX}~u?t[\6|Qˣy$d270zye=.VѴ;#9UWk:`myLl:eEF"ȴLcuCh?4?4u # $&bWa*:C+RXŃ|Zڤ`lG-=-,Ē.`^ج3@@I:bLӥHEgXөjDf.%uuu($V!/4^lA%Ҩ.rC4yfX+rh>ue?''ϓ5yеuQG*q}cms҂5ݬǻ+l`YIܕI_sSP].r~PϣL=҆e]57bQ$!s ")ɚ\`/b"3ESk"% e-gn)cڟ޽Rn`['gTNZh+KܽAg|P1u|8- f J@.]`rZa+F4YRr**RQJ`֠\:Ppb mK22 ٣@U@o)"f+|9A*f4Q:!غBUͬѧ6<1~~4;4U@29I|nmEӕ]Z5]?8ؑFdsv44.23aBZA"y"&H>.a;UUժWAza-|:sGdhdɞ?n4cE~ (ڌ5Mb󌍡PE**5IM‡ ]_WiÚ>dwIS:]nKIl1Mlʥm%k:3tN+Æ5AM~U CR-| pksutǬݲEL?8ۧůR^ݏobKٽ dbbeGAwQY)Md0/HșȜD;k,RY"GLZR!S,b5LyGOܝYctҖo<ސU-8_к䣧n3wtd۴tш0CeIxأMʚH2 6"h;8o]?Zn5")O'`p{e~;g7j^l!5$o\y|T,^iݻi7ۢ݌nƇ `J U(@RrƣՆ(A|Ff>*dUE(M96,26!i E1&ր , mbLR"(3-*6Q9.9b+ElqUrX'o zu5S3/;Y)[ΔzN5$|&%ZלbΫޣhmY7A):F.U@:L+lQ [:%J봶k,DnBiYq -H |eD*ݔu:5Ey##Q u̬n#JQH] %M.*ۘu6=Cn-6G%5ѽ _4ri=27DkMEf5 2a[3͹e+9י\z_7J>yrrlAJ0HFfݹGέf\6M7B7`bGN*3^>?دlߝUcv>K=7? ]LJ?8bZP7Y i Q)搃B(d} m:.1**lje90Nx, mIL:ZunĎI$1YǶP{`fQk% iP"YκR<7#R2%E CMBH`P1dbHd8R-H-B̅0&Bcl;Շe}{Pa^*><#7zO5 :N(h-LJ~hf맘GVYa?3/P_a %d&pZ$wZ'>։\NuVcok֏pr]6WY> /AA H^[[Y<{ꪸt]?*:zG<[މ\uX;&<l7(`>09mG pLʤ^g $Jه:?܂lM5^HfޤoRc7Ûzcj8ˆs{%Ri-v#$I$9APÂYH;ײ%2{^^ Jq.)bu gR3\M˺=ͷk )%Xe0ZAɑ!gN\sg%FV{S hqeIDU"x1rXTvB V`ux KL4bMbPv^𪔇1d{&߳{lznR{>z^L,VT CRc9#G%ģkd/)t,\4֝FxbcR S E1'(AcN,CHCkM)I%RR(deW"mVkNC>vٛ,Unl맩j7PW++¢"֜Cs9 $!PVEM(ƳS<(xr;Өz:i4bB8m)ʚ"pTG+-A&6!C3iV4O__IvU5K}HN)1jQ@*EHo$[ OԷ& z' º -dv[pH`hQ;-L L-6xj)B^ 0ոPFAI*Z"2ZiI|" ){(ג:%,U׮S㻫}'Ü&i8P% Gc`J Jp:+o 0effUF_4\o{!zM㔪/d Iw1X xXuFedbN/^ͧn2,o@1LQ^OP]w[U*Ou7 "O%5@4zn jz{uq+jX'Wn/c5{|*&|™PH|/'6Nx~5jU&t+*WY|xykZ4ܛR4ƻsˡoQC_=88AIZ 0*`Jj[FS8M1SZ+aZccQ㎵bGkf߳Q6po,_w>,'^f pc~@6~/U(/m(C9򜟓Qyp<Bf ό)f@(AnJ1/QѮS(>3,q5LlhTP_g@}沺\賓wihATTJ%DZH: ϧ%f O r a2DRQYϥj}i 彄nzugexJ'TcMzR&z2IP>(F7ӳR-ڤ+so&e~t5\7R=]m_BPm3zU{dITf&Y)e֐8_ʜ rCF }{g8&v^T52TGO,OHMK:纲O>Q|k6KIu<ʛXyv6(6̹ o5c: )iOkn4V7,ؙw|6ӝ}_{s3+.r," (90Lsk" I$1T^K' FP/ \a:9[yZW羅9賥ȇ9^ p}`s"Fa#k51fN l&.X"kE%t*/¤G̶xXg& ^Wpq[RrAW e+OW v !(=(^1*=3ǷQkE#Z)ۀXQ \W }u.չGTMSE5M5FsZ5`5쐴kG,XY$bTF bDvhUj9l[EĖPƽuH -4(-rɩ*17{mn4j,;1>¹Gxe_~mV..>^eL:^=3W+f/rWm^1B9 0'Q?fpYGH(y|;AfxfWm1icmf0,PRa77n?WK6Qi6daR>pe!t`1J_y-6szӣ>o`}Up->-2n`@bXi1?j(}DX߲q UCaՀVSQAT\sjRD&HJQN)93Z7D&zG6pJFtaUPV6lׁ[O ti=Ғ ~oHkʰ7*{Kþq~X(@,dsk,l^[eX@,ZkQ }nt;ɃWE'%8D?(jSG+M:\֐At} 7z(+/T%6̙ 0uu o?p4) Xҍ-niD]>2nۜB*iauJf%0|:(h;76m}u{g6f\=|| T|KYMa⪻=WA φ|,m+[\~t_!ɩX:WøQi5q%&%9Wb"WSq%XuGJ$S.OH]%<u2t2*QK豫DeQW ҋDb'jD%杺zꊥbG_`oki$rX9/]/ǚ[;{`e_rz /^4We3 ~ssKRmu%C1B Kt:4\LNEM'j:v5iN2<7^˨|Ya?ν RS 9lTxlwօB Ao*aX`"+> \:4+VXUl+/k (X{fO϶*rv̻.}Z\kDZ±˩36&C}6p`auB0@D@ zm  $3rJ*NG]%rOr]]%*ղ;uxCH̿Zy8eY65YQų00Gg 3^f {)dgbl R#BYn991 ;(v\Ԧղi]v?{Sխ&h磾\2Sgi%r,ɍhH1hWS]d&0Ka˔֌GaR% hz'#M(g S}JJnȷ,gk`S"(3JX$vpdh6$z3mV"K$[J5Md) cWs"}bri{55ͷg㩆21aP>ؠÁ 6Zɍ<`t8ِʹyu  A2 ǨU1kQ@+S52xM(3NDIpO]# 9ue'ŚLti3Xf47p̧ܦ 1恙Y2QX vV9#H&&!t:2đ|Q 7:|Mj`:gHu TD2'TXdN+ 8$"0xbuGAܮX2Ta곩#Vº dA3eJ{H_!ev.)K{ƍn4scyDZKܖ7.RD*VfUE|dړK+PzCъR){L.лIĿ|XK:D:6d혲dTA,,A,DZ3rV1RCb"UMynHGŠ ,mD/ʴ6U"bvvBzծG8:/OJ% iE'[Wt7< `> M:ƳwM!⏲q2\䗯 :AsB.)A8?>IT7 I"qK@ g3.;c{˸'< '{)fvljHrtLd۵M ,7HQ[|4ۤ+/gAy#/]E˞^B\d:_֭ 'N,nm:qD/iT^7iv;T._#oΧG^_%f'̩t<'g+峽 GßΧmG~x'8| 2r$PG:]5X=+*,oIG->,OAO.<>=usQ{Muս*!MJmHXi4n Q{|0ꊟk掋ju;v^7~x{ȅ=|_߾yE+ L 0OׇM=`147ZZXgh]g|qy+ƽ>]>֘=5U ) ~:j?}|ztzVzԴaJU뎋B&Zg_i.tsVJK(+޵Q3 YMڨ6x>9Պ8kXwG(\m#H}6R"Jͥ֊K+ti=0Ѡ\p +:!jZ͝IΆ95|E#TK#mz''#XFܱ&񙬑m,VqV$ sN= N(+;}L\3 su/Z?71m(cxK=%j$U -K0<ut5]rc z~_!Ge"R9"sMP5Y("` 7sKϡGMY$WFkt ;2 Is LZIG̩[m 
Zex9Z&]iOϽbxs=mX{TT8Qs&L׸leLzseMf>[APr.J)&ֱ\: G&5q Sm(>Ť^vSL>dGkL"[TdĮ#h k 8#^pɂ>nGD]4VdFtt>io\8gL\kg`ɪiWގ6#gx߮fQp{Ι!qQRԚGEjŴ~^:m kh &PI ^R{ͳ h֙HU1It$y&XPoQXɈ+cC6X%V:L "g'07>8U!1g^ &c";pȓ3A #-""& *%=$E.J"hsc;cc3w}ݺM\+,e Zħ,wO{!JXpeJ6h8+nCO eΫW[;+[)0`pHogRi]ƣTkVǙUCɏߗ)3Tk)~_|:v뽤:^E}yeCbiZ+ȍjZ B#製%xs7d9|gBiԤRJ/8t` ` \Z9RfJ޸hPVm:)7il:ISB:e[l*AWt5vr-{k1/EH\jVg8~Z/ހ]Z8`곣M > Sķ'HۉHO+c[2?] J6Δx[xR ڄFT$س>d3N mb:2Kr_/g[_hهRgd9*knw>6|thwդRǔ`f(Fe۵_ &)pQԐ̥–+'ݳ3"t)fcg-ۏ{L%ywVz7^o_1~/8[`aV@,&4i2&H͙0:z?_%P:FT\x4*'F37\kmQ]b! _|k鉧s?k#LvjX/֭~'>6wU]θqNbѸfLȲ4ZRv=$"j{孢3eu 0x&7 m]c6޾y"wGo-& ' )'uGs+&tr8`TКF)"3kll^I۩R\j١`S)5aW9"jǀ9!Kb 9:e)ȍ>,^\#JEFG<& %e:C* y"m00ksjӹ#B7zL7bi|GӤ)u,y3rnB4VOj&=|"ifݯƧ&g&Mx&Xg(RS 6@+RcM^Dl[ꄞu~v1PA;:9H.K@R3!J&a CY2 BVeeU`CL 8rK5̉ީM265C 6/^?OO?>*HlՖi 飣J3-I9bfw.rMY޲|"˒3»e%XaG1yiy+87dg[1(_x|kn^Nyg䍟=(食蝏/6a <"K_?æk#>_~auzWz~,9cSҋvfIYe0҂Y^~3^|7胶4}P7HзHǖis;/!/w2tl0C:}~et?uALI4?:>Zi JνGp4.FTK 8_WtM%E3sVsZ2߰@tq,&YpJ/noevF5gY=n>1eL\iȘZ{.EpĸF{ gJ'6ͷ 7_(풏`4~kNHϦv}Yvy#ħ?~F5J0*(o:h9JQ'uQ(xƤ~~:oZm& d m"NFQӞ]n&  KM($u*E9aWRsʨ6b 2Lh%( Xp5\K C{>*X9$ezMj;[XU˭aOՓ-Zܪz5~b'Q/7@1F~?0y5m[4|TAbqV^N]h* Ѥn2pkަ\Gmuz+7c?\?pc]G| *p8ӛ𹊀5a"aVwo ΋7ir߂WV>j3#)GCկWUVT.㳕U;t[ժUʝ)%v^n?MBjD&Lj[+h ytsK`/h̆umQ88l Wtѷvџwfp v/2O3PLݛNmz|S3 Eli`t wi ó ~2\mvBb4Ib2/̣1Ѯs=?~  e\JPE,JP >Is7|J/A&*e7ihJC%*Y Qy>sG\۳,0ᩈޯ u7f$hQY|8ay/j-OLmh2sɢs"o pLH&+~= E{o uJuB],j.Л{9Nکmn7ᩋɓꟻ3A. zW =7fCn}I;R]kP/Խf`੿3 =z7qڡXw2^dOLR i7F[Z1@uiR o]cZ  C1v͚ƻu;N;3{&6n#7b{UB-I΄! \HSBu$16#M0eLyt8n7r:<]aY9賡ȇ3 :LtQKωpy Lhb Ƙ :30X`^#J/q0^adl7%ƫAuWsQJ =Gl!Z#=utnpzWWn>8Pڽ\/fnd`m0xŷ?wۿ]W ;gE%6)K.<WUƬ0~]?G{nmH.bVg뚹LrO3q[C?|ӌvsr/L'x(j>R0#9OQq ][r6OPUD/:rrz粝NOE.u _L8,#Q}X'T!9N|NKyRVu%j uftj=?{V\='C-(׸D7,֣ >j+k aFu-kJֲVƒA_C4@5ĔopOJgJى~kkEdLJ #Ӿ0c0`=zk%%T1umbs쐴kB>A`,1*E#w=g  @X; ZDl ha[0 ^tJoˇtf,xa?쳯7tyօ;қΰsy?nbCEP7`ʜsKy!q{u<)m.0`pO\܇rWK1Ѡ߉ KX-# ɗZrV:Afј]bnk6 mFh/ω+@InwȾ110 2**e+t@HmojRz 3b1Vٕ]oS0iܞwC1FLPtzI-FP.-)ه|*aoo/諦OWe7nnm?Y[uUg䝭-!: M CXaD,Rz4Ȥ`sLE&[%]m50bÐOE=;7^׀qPڒ-Jm|,SHѸkKik6߯{4}7h Zo+gۣ@䔨YZd(qذ40(Y7`by s`X(XȖX?p2}4abQ;ޥ{߻g͟:)LDY $:&8d%tÈ;fq{l$nėr 8M(rZkEw9lVJHH!Q߫|{[ /wevt7h36Z@sc/GL%@>p4M)uі*WLU%"mQC<9ӼbɃLh CH r$0j"J;I3J!0i!D!eeZ30Xk1hFs+%a 1r:VC2v!}>gˍ nJl6[񬭸>~ 7@T1vPgevC*}ս(YC,ҍ2\KZT+&B/' M XAC ФeUAPqVƱv.<- ׸晒a뭩ov3duq3ωg]G=5]_<{b05F.?w_ y>}͵ 7_{޾V15G_}/:[@ | g52/JŭW?.=[:dΎ>o|m2UjMZʄ2a^!X&HSE57V&Xh:vF&x&9h. [,냽.Lc. G۹0$VqfIF9o 5DXfԫ!Kc}QN7z육xjk A",QD Lb |.8ɑ\EHXG"\JlB Oc*5(ZfkYç0Ѧt@ 3*%mpzFaykvӫUWL\I kjRbN<0 K& k%e [s$N9Oq, sC3:=֭U|%6;"E Rr h'Tfi%1  Ay̓ *T73۽MC 6O ^ "cp=U$PCMRPkXALGaÛ;.M_jiF@Qh8U̲H(J!H5`n0enNWz!ˡ 0mikt {j"v^7)괸ka@ڐA#LiQR J1u-3+۹i GZ_%P0 *:mqkrTa9W\vy?{y8(t+Tf,'c$s&q!}H.Qy2_]_BggSQ@>u-U)hr\B׈nQ*& oװN#ͨy1קtQO[k߼Ez"xz=z_?x6k-ÐhB̕N}5TOvfs?];N>ahN1%Ɨt*V63,oCXeu;7Nl_G:V Z긓Z]WNN0m9KUX$uu[UvXk*?WµyۋϾ<|/߽x&w.޾x +70ZEd& `=imiho4UlE|v5j+]CٶY(=/~o]ʈu3}Oҗl2KvI2^g[fFΤqKH t(+hl"_E+g:;Trxbz7\o]OvFwnu:,1a;v8_N2Sa;4Ut@2%aɛzͪ]I䤊IE'/uhãUops/ &~WkPC G._p|_~Τ0 Qf,"S&xZΜ%grrV&f 9oB!gh՞2#,RFXp4+wE &a_zMf]hŝdC筇m +3} ϔhzo?(yYQY4<W(U2xi2$BE_ }i^P{G'NJ9/,Z 1Jp+j"]P\4L7`'XsD1]ko#7+BJ]m>._fvvl&HLd۲H }/K%Y~aew}H(V9Xv4[oN쓧#SKqq3# F_?9LEpD/6p9+Q$ FdBUbG1K'!A`9sۋ|1:#ՌKVwqk y<&Y2}w癇S# sQGxu!u1}ѦEJz]u;kgҮJsv**o+WJeCѝsɍc)6fDJA<^"&{(^KJZ-< oAB -p C6`7#-$&敱Kd S6#ȸ="ى}V|]DvQ;HI@ Y9p6)5EE$hsoc{dc4Ku!UlkyzfPLہFmqmuę8 X*a 6%УEC ` ʗY.I>>'Ƚ_)9$i bLH#Q2i,hb)i-d_vOoó}<:98y9/fHH,]{IVK-w!G+Q0ԨLK2ΐRf<$ˀCԌ1)3N,\dI~g˧h !*cA PӇINzSǿgRF1!sdTC *ZG4^E&Ye'm[MZnC{> &CRؔ &9,\4. 
p H Նx$澠vٱ/m{Z>At<ƕc4m$gMΖY1D6IZ@Bx6UH!3@*e251IXDS)|IT'}e<6x؊yD}Aj㡈*#Gu3^7UD&KdAFsEbFk<;."׍s$ 6@8)em$!r)472YDRLp6#Gu \6q^gY@\iցt D\{mdLp 0gd 0G mc}jx(*_4kz9=K'U n7+ gnNp3y?ZPwC9hs1[2-72mH^6A h΄Oɩ;AYN7˩t04'$VgnA9/<fYȬׂTIrC8_Wݴby8_eY%SSY=c Cq!3'EhU|RQ$&:87*۱W6jrFELXtw^2{E(W@0'K1꒡rRxc#擐-|LȽuSKE"SJQ:"J>e:C#%E m00ksj kGlVӦ|j楇6>ӴQ䣧C19ζ0[|:"lΚOщDe`\f24n):U>z`+^K( be}I{]1^A]غt'{Wuiw{31- UBkp-#O ǓX@(Ц of4͏t;?<ӭ8CO'cs;cKJIzp}zh%vs:Q9K,FV~)$l]# g;%ogێЁЁNhgѹ,kzn;[WnⶼtjբņpPoڅm0; AfNch:C_~0{?u>e#d@mof<\l\'ɠ[͟xVr} ӳ N?$O#uT#QtV"d >p"|Gx**U<Y2UK^ m!(L7kURy%-U6o>?Ё}%fJ3lGBSG/. gsxRw/R{&&厓fuv[Z/`L,5*$Ի\.ž\vr_RLC#3YP$>sh$1ZSusiҝ-Gs?i#"> # ]w] ڡM1u+S_K \8jNj^iZnhE}xͶCjPyڈ]w{o|xs>=/\?&gחDmVgO'MGWbj/t*3+Bΰ)jVӨY{HV9M}my]Wo'nn F5%f&Dh,n šaQI. }ڽ-ޛ,1ZYˈ"1d+ @A-[@8Aڶʐ6䳌I:s 2qFFݫ gK> + tr| etI7ٻugG6K<) x´6pO)EHN!DdR4W ¨=DͼdEt"y,Bo4ʐ- PKOFN[m8-N ܋^9Swk\+zeLR9;/FKD3ԜVБ>%$Sj6`0} /PGӑ1Rs*C۸]חtfd^6g s.{aOA!pb(_9({ݼ .JDXln>qn>x*rZ'8]d~gӗ)>p.R3 q ų)R2%7#O)t7r#[}6%9肷Z`(]r앐JZb628~0V_6{_aש ݖ  mԽHQmew6p< + ^ kt :E4r7N+ڄ9ٰP@w3Y{%V<: {??et=mu.7gts}tHW7=/6Z#?|0|zߗw nީw{;xO~#O.~n.]Mhmyv&ӥCA|Jm0?J(G˾>@ ʲ6mL.Wd³_.8Xam2=ή0[4,9Vuu _We)k&iw=|kJ4*xxKx[ܕ2uΏ"h]}86u<8;`xFԉ f$.>igJHwG$;Y~6=y?lI[,w[|!u͊;x۫>ׯڥm\f+J> az/_Vzѯr[ytJ|͟N:VIB<~l9M_*h>57AvugW81~iG?ǧ҈ml5~g9bw]ޅ؜"Ow<=%e7 [WΜ+OEe@]0OHmb"ΨMx2;M.oFCey;uZ~Dݍ~桻7u*g mBWh&S !$fY2EL\iHbIꄌ^d TL5TY_>ޱlQ =igXDU픳 ghO7;LACV*Wf 4AƵnvrڭӶ}6P62/{z7R GK?S&dգ['@p<=\Ϊo0,+֠p A㔷uޕdЗ̘v݇c;7L)r^;O f`# $g\KrTj 3_FЗ=&㼅隚wMw65_ᳮ "u1`J;HHJSqH5cRVJ\dp(P[DtNg^;TO#m`z::'.QV[P:6vW-W9_uP!(vCc e & ¼&BP0A*29DLnnмlVi5Y1Tʷ7Ȟ>U" Ʈ]hz7f7Y1s.Fנ}NH]TQW@.CTUJTR٩P]w |J*ɨDJJTjթQ]QɜA Wn\ Vn@MGyJCJɬnK˾mx,<>gNJqU2z bnw{}In77A^´ee$4! kܚLUu]΃.zkBC9XIݲSg(ggsއJniU -R?(uq`y2MLT`aY\'d4{wWϹF(e;dx< rvK6n6mYVD-I3u%f{IL9sUi&\v?S.^f XgO6M&LAXx>p gYQ{}R55?`} H.8TewqѰba],SkiNH;shXyѰaEʣLh CH r$S+${V(*!D!eeZ30S﵌FMFS2l=k{)N3zS7D6MFp|6iچ>KpG媅tc΄k3c ?h<-H]ҝ-WJ$!Ou$ [592ோ m4Njtm댽q54޹n[;32c慑לݶ|RGøij|Mm+O\s=k6 >r޾|?r>CɅ[?o7|gX9dϳ6d 1"gޢrr9@{|w{ﮰ2FFb> ;o#8sX|zDŽh4}ňv}Uddrڨr; P&cðZf6}SYپs?-M8N9Rwkx*K#ya P*NcFBhm 4}et*mT 5 T!ӲFn ^9N{Aϝ|u6qOQγ|R .EzOtY#rFEhȀBΜvF1JȯЏGG$4Љ_Oi %`F*`! #J!R`Y)"i7o.1ny?nN]bq=۫#_,qʼz*ԛ!Z0(sF:\DS`xAY%>Rp 4A4B՞GT OPomƜHcAY8$ 8z,֚RJPV,}g2DOLP@^JJ8'% sH0ظ$Z#gbNp J'ݛ=bFtg+uS`C 7<Eq?gVb fkIG&uji=-a[Wm:gri'翆w<ؓoW'ϲ_X/o0 y6~)>{ne:'Xsxs,{Nw w'E3Lw&Ɍ)β_Je)FϣY*th6^8oQ ?\a+TL7UEyS6eԧ%_TlSA3( B]Ҁќ W˅*P~x7)\?,AY"׎ ~+RẐr4?\+RRd !] iY# t(?H7ٔ07Ա\NȖ5-W 5m>qwJK԰fc_/]Aލv|)2}ls*͏[1^Y#4]T.|yr`X~.H%_r?i[SATvh lX*@QQv2zdf'u3{=07{VhdiweODw^rƫ >)y'_L0AET3$ TZUꥁ3L"LnqE 1E 鬅_9|cyz߾ p} N;lxdBs1L +XSz(ā.'aZnD>Dmt?u5XF{ow/Qz7.;|} ^Wp􊣔Ҧ'uҏ-UP5w$J4׾[HaeHLǘ+t fGJu҇ɗfkvx)aѥmYDL>uy.vpljז\:%`)\h!orrY(ݿkVR& 1A}EQE57V&'9NxQ >^INS۟}Jo H. QH{Iן_|UH9QQF2ԩkO/{p^5`&", yq %#/@GMfExsg=ZKmFs;LSdL n KIJDέow~{g5p%d@Z$I¹uZ]V Ze9k`<3T&Pyei1j~}EܠyQ٬ ҄kb8on=},>Ί2P/2`Gyܘ݌f<ȱ]^.ڲ!QKcIyo>tNW9C&8 >`nOwLz۽;l6}DUuKU}g~g zIge9#LRÇ&r ޕ-qc_E,"jY3rt[㰬Q05"誢,>q).UIVz\pp.pq1< '@ .(TGN4DŽQB9a32$N;LgHxbZh[UQ Fbp射.CQ|2t[Pͪ=T~\V?6oeVEqf+(r\5Ob/ggJB #,_+33lYIljc>h%rť2͟.JYMQRy[xu>;_x.̮Y9_t2t\[nVvmrG?ϪEv'qِ|99Mٰp1>ȣqOzuǣŇg\=E!7庩1XU4E l>9t=NEC:Ÿ^$ѿ'*-JlĆvg޼ۣWoQ^p 9+& vne_֚75ȚkOֲR?=5"ozd+cvcpՁ8Xt(0bu]-uը0 LףyDz=d8!NZt `zE%4TǼ1]fP$hlekV貚xDFbH%5R 52Q, qƂ"Yi8ZMTGZRBo#=dÂʪ鵖ohJ8o%@s$eJo eV1I1N#k:Xr[n뚧+;(E~W oU~ bqE 9L'qc̩/#Еyp>O BJXZF OL3eb9̩c.$6ImOzfk̝NA| }l(>bnRxFLOD$ T n$6I˱RIg,}qqn-E^ߏ㨘syq!C$Ƞ(.2WMU_͇My]"D6wyLz<Zdo 0f/5I)ĺlPtWF-${'28ګr lk$r_Π*PJtd1vs[{rШ%-ggr2%\|2ʉc:`=pΓdż@MKF\MbRm: jk{dP?@.RhWyt_ i ʊJc.(]Զth_aŗM@m5qMF;\Coa! Tgi)rD6Zeehi!Y˵pAb"hq0;c$:'-*ɭӮY? 
ނZ$qȟA;Dj8CrD `_5H`lI,Ghy=k[޶3?;z^9v]d* F>J9jeo5&\q,!Z^:l))a Fn';o((Q +)MTQ)0^|dqdQkD!'N d~k1X/y%6 çO2BmJ9'W%1hEأ˱c \kԜN)}{swG&~pdmgi6ve8uUD8mlrQZa5RlmrVubV^Mr~=>.u(+#v-5>7/qB(ӀB% qx)xMВ &J!䙼Zjw)k 8lњ$R*&$18֘Vi krn UυʅKDl77ÂL/~_?Pړ'g$2DJHT ƐlBfE D&HR"(0\'R 3dcFsL\*!f&&10LœW饎n< k&f[X=k =[cqnhr CDќU)i4x^H 5q$!b|T;-(ʐZўG\"Z,2% ǘ%օУQ acpʨ_ebl #6>eD0#{FsSJ@cGЀE1@ yϝd8Gc&Ct"5ՠIHNMr,eqlJFI{(njsUsiV%/BGy[CtXM@!P#Xڤ! ,4y;o=8jMmDCϋmZǶ|-rJHpsZӋ,Vc䝳XﰫcFQy; ~ܓ#'K QG?TvT >D?\K;?AA}W~;Nߟ'CS<O.vcs28ĝ R32I8"˹.zꪟUr?W}/jƒNx:?/d6"h+J{77?[xG^~Ǎ18RU0gU^K/+Sԉ"EZ-jwfѴ&AݏEg|bs9gF_ˇ[cq1>,˓b.5ϭ^st3t hj;]et p]!` 3tR ]e2J=]}3tEYt@Z흮X2gz\NKWCFOJWC)D>hOWV=B4]!`JTg*å+t2v(t SX2`n:CW.UF+[2Jzzt8 XU;2Zzu(!]a3[g<xA}#~5x2kZ@kT?(a4YD7-t7jBsO5~0nV|zԬˀ^cϊk98,H3.0^,76"[^Lv}OeEcm]L_VK}Hu^WVT݊U }kz^$rdAmuqA1ˋ!냮SvQ٧ݏスG >P=a?:Gi׉ǎIK p,[Kr+w`,98MQ~ߩd\ pe5;p\Jq%m5NV4 TWW1 {3 87 7W1F#v\A.%R2Q\? x&wrWk~p\J/~7rwz)ڗ S.Ln<񞼋&sT•[+CO༟WıWς+SݽHe\џ 1sNH3Ӹ+e׎+sh&\0O1rI•Mqb?o:G\.og,=w۲#"Mpޢ&ɭ ,=3I},=;~x~ȾMNVdéHyb.Yl{W~8qW}ʙ4eO {]"6D"v)(6Es}ؒ('ZRaet+qu)UxRP) .u>R$gc%7 T0R6\!si"\Ln JզSl҆3U$N&• N1i\Z+kǕt=U爫DO+q;!W4V 8XˑNEbqL D.Sx] Wzz/Dv\SeLڎOpMpemt)N+fJ&7 [MkǕ\Նo+(%WYq\`f]R1m:C\cNa"\AyA)6׮TۊUsk?O}|ŐsʡGn~éO&mOx8Ղu!ڙpTp(Rfq6ʭ(:G\ f+d,f\\g=tT޺|p%Lt >sLؚ*pu.̈́+f;}YrVkǕn*pp iprYfRI6\np8aK?׳W//_?>?>~{o+ѻ!G.y*|4pv!׏ ܅)S׀Hs//Gov${wA B4yarIlFQ;fL,ՔTCl(<o/={7 >{ųZ~uyq_}/Ws?_= ]o?GE(%8w.1vw[v}]>_$N2tMe{LeX_;M+'|h\M/=Wk?p\ʵz!;RMӸ+N֎+UyJW8N+1JRW֭Tkۉv7(M+ JFZ;Te WIܭ+{#DO=IY\c5a+?PsaYZ\bSbs 5t5:Gf6SRZ^J[mvYrbW!D:>FWPkyT{pMp%B_JyTbf~Riqť0S1 Js_]] u爫$!0M+qW+f\Z֎+U҆ ><1؟&eөoZ$כL-hjJ2w*lSC iprWPɯWҹ Wg+\W*r/x\ALJ?W톫3ĕWvLp r=MS Zg֎+Uy{wkӣD*:ϯ}uPWO bA+4zؿK#kE_?}9/e?|qqY\.ohKj9w/vo^^!Xn(zC`nrߴ8ˆ}0❼g׹xOw䈁w1ww|)}'>5Yo>}߻p7oG1S܀ѻ<ʿ ߏ_'o܊We7?[Xyr`3pE ~}ik~ ^= %zW}G O}<[Գ8bYG]˞-JLT\1SG1~;}o^Oo%Y_Gj/ ?y~}u@]^^jrvd$%qnSR6.p S61d S)`o ׽>?"ɖ\bw".\36 +fksv81-zx[=ړtg/4HxP[c4Isݡ Xg)$|S"M6Qbdwh騁NQ^@TN5t%ZDB;7J U28ؖkC'ptap) 7l (RaMz7b'Ss38:Ekk1]ƀj! 1ŚP*" P[ %#BR8{h$dCiВi|P,rG͋"&9RE5Än$'b*ܜYH@ApԞ%C"XdGhϝPi٥ LdA,)TXCbbY r30أY["kaFx^vÎ#ꊴqr0p>д+uA[8a[Fhcn RXLĽe=* qk{vFX7a;R#ec`T\b$l@6lmbg̡m+"ropP7e. 'a^P|QŢ. } +  I"912e(q,{GhKŲ`](^b dAaY#e"ܭHw 3xGxsbQQ,TGַ>XżpaQMZuY 1H?,&//OΞ]oѼȓ%Jq؂٧K6"Zwp$$ uԆHTFj PK_J` ?PR"hc2ψExXKGDJrFE0lzdž@LG7XX}Vve5m1zFrX`\w^y]Bt}b e7/._Lם9`Խ޺4sٸړ l|S>P v6xY:Z;5WZsLdG(3!e,hQ?=h|QmF٦FQpPaRD) X[N!F -|.=hqGr5w=Ud*2J`*(Hȶ,= n1`=gzT}zZ {qTh0Ͽ[ЊbDqSjurp=,;z.Fa50è#W(ղHtIL 8nIzL: Xژ]koG+&Tcv6f7$z5M*$Xn,%J6 V?OuU:NHYA9zVcVXZ#eYH 1N{=PpJ>9. ~3)%@LT LG)I1?],ɡs-8jo:k /u;JxpT k 6P 2ﴁ̌p-+!С _= kFД@F`~cp^hv{ xk2C)pD ,qJ^qTVo_…HZ6;!$pUU3\h懛 pmyY xyl:Y 9M{B o݈J_ >T*,T@''k8n|ր؁PrYYq"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r}N H ]  wJ N @gHr@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 ޥGN $*'}l_Ý@jrHp 9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@ݾF@B=5t\!zw (5#'>:s̓@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 ?NS뽱N N5\7' ߩ/z[F8.Vo x!U,Pg7z'isª0=r/7M2tey8 $-w8>a7ܸt17eMu$釦Lk@_O'*H۵6`L'@jCϧr^P֕T@Wo;%k1@/|=sڂ6?$~[v˖6&Ł! V Wuh_<+36fbG*+p~ h]UF3ϐCaO!\BWVԎj*Yݧ+YQWVԒj#B?ke!\BkeJLj)DOwEWשЕo|3]+DWCWEoѾ3VpKWۡUyj;vte[\[\ {ɾt(%#C{e{DW0f|o :u^]!JNjJ]`M0h:]!JKtt6+M_ Zyu(&GlU'(pp7fӒ +f=1?C_ye1yPJ9XAϣv2/:;鸤w$E"3?^ɌߘJEXhXVk Tc}*)eT!꜄RA%k}HޅGLHUEL|]WRYԵ)[SU-dV*9'.x@g (e/Fk M^wG{8%vvsY>Y.ϚڮZڲ|m:oOY=?>cuvw a7R(% 㸔1>vW?DM QnS!V2קI \7tp ]!ZyB>ҕiR ]!ZNWrc=+? 
b]+Di`ЕcѻCj+J?7]m?z[}oP-綠+GtuߢN3{DW}n?VhJs o*F] s]+@$}+t=+l\o ]+@'CR^;zDWkR튎+l ]!\BW9#Jiw,zE[j;晿m=S0J:t剮[K%\ [BN7]+@!Cw] nQ+\_ jNWtm'+ɽ^z~k ]!e;^ҕ꼫fsd <6oͿ/}Ce "n_|iVt\̎n:QaE>$|(7~Ͳ/ɨ;{"PKܜ.vNO2Em/\uc=Mr ǫNpd~IXՃM}9r:/yJ_WٺʸQ Jußm)= 6WXװ7'wSya ;p+iYCZy5Me3W1oS|^N 0},yuuEq={ԩo-~ &^N&Ftxt/$D։%\디qJ cTIL\(LJ G~q:̀IüTMIuY8m>xVbVZM!`=bPYKqiuF";J2>3X.eIبi|W'(o:٪P7-iΰUU7N&/,o Օ7S 2Uax4C!=|K/euʣF&&ILKm%EL^A`˹2d.YRX4Y8jmv,phP2:T .(.&PA΀^ m B-;-d틖%(^5 6 Dp&85KbQC@ЛjWtO*,hbs[NXupŒXP{Qru6Dom@ByvT MQjcbۘgo xU$bЊ@)rE 4\֠4ɢ4512̺FP5܋Cn!ȔC~!۫6=:=˴OA?]EО\sSӛAc4|4C( 7AΖnrt4 5=ZۡC}|8pU(+<@]TB<*.JtQ|pR*hVN+Fm^?{(f̎zh_Ѻ|Խ.>8g׽񇍠j=ܩMd1(`Z15NA@;(< <1! X%)z sYRL:r 1y.xT`gpnWVROK#\ӗd|%-GgyK$v 0;/>ծq(ѭNv?kJHiUb7(p22Wz7/JC]tkHԑ\M&Fn*d&gkRRgx)*EQL%j`UPG|V{Gz/,׎9{Wڧ:HT!Kp6 w$˓vV>:: ζN*;ՎB P%%ҵ*@ X&2[/05ވD N23jww玣rRu2~8tT'FGm>vrgu=O*.\}|v%nW]%C3.9sˊ^A[9Y3Sbl4VBX$wܪZfCCOkА0:h#hEUI:(q#Zu-GY[džGonl2p>9>8Q᷈D^7ݽ_1K-nbW]_*kVgĭ Z*AVQP΄3ŭ4>+mX4XAHktkn1)^R]LL, '*Mn'8qzkߡZukr}7 Ë?weC4QfXR{Yas Zuv%,K. NR}%㖹iFEn6/ylsT/t[o)qn,(&OGM1cqQL QULW8[i "欫)yov4c1m']P$b^Vr68(/wJ7 lvol67|}G< }-zYLVSlxxYJ3jO$!,/LƄHe8ē\xJ<80ksTIEX*5g?GDGlGżks؆f^r ,_fw@1brQ)/Q۰`=vTEih |*Ь8#ZT;jN(O:Li|z^NрTɤIG(9VKʅFr$CRRVu-V+D; &80P:G`Ta'0,֜}vBvxȁ*>ζ`Gz%t1&.jIe"'Pb# . "N#GeiJ͉"ڗ>S9j|3ks -E.ƶ=&=}fo\]Nl!487{4 +YD\.ũ5x>8\l ^v囇;伍aZewmOMYOio{xEBO8kAKx)#HT`G0? ^ZBl0U !R|^{-TH`cĭAd@pJ#c4W),X,ԅP XxR,nQ*3-;닋 wN7?EڎFf#v`2DJ@h QAslBjE D*HR"(nNVfcbJs L*!QSTBmGIȍLɓWVGGl7.f}Am=`wQc# A PΪ47ŽNx/A҆62MG(&!N ਐGYT5!j/8"FօУxX9K_ }Abq,"ˆhDq3S%Xe&HO31Y'B( ^2ĊTAIH⨍ L  2θ|QbQ{JrŬQn\98LG|qէbVr$.r( 8Vg h#`M M:B95 yC!1€ža18ia<sNg uHiQgI.Tu&"ru(>!␶RC  ;"E8/Ns?}zeO4x2[θmv4(Bb@b pDȶܙqAg|,} CxHH!OJjXH` n@CWA&Mnsm-v`t񹶴 [Gʓ5!e $9*ՊRID$  Sx&yg+ !p)q v6~#=3)GT@26FfCPCJ0vfuk|X<1r(ϑԌ`0~'mI {Ҫ;x:[o^/#_/F_8ua'N '?~Q󟓫g9Z7Mz)^6HًggV(ެ-릆Ncm/`*3qnYgGFwi~-W}}8v Mh\j;cvaRlϪ?OBuLƶ u @nڰs +~˫go7ꆳ.ʄ,6rc٫qƷ{q+pxXMF7^dw3jZW1w}w;XucWMX:a/?HHo_&_{,ufo؋Uy-')^ԽtwzbW,1eskrgڹC2bn[xРf}~u:j[R"@>yoqĭʐm=I bSuwʏhiۻn z;3C4;lԣ Βfq%t?wu "ryv<U7-j2˽cq2wl]7;怒.z[QzGd=5\Ek#/Cj8(}ҦMJP5Hf%֘; W#]OS`D u)iaeY]o&yvx2q[.DK-UT::Zr 8;.J1V/!{8ἀSxk hԌ YXKVK3'\+W/%\7c|Ynȯۺij!$(=( &(挄(k5"ʭ2{(9ώh)<{|YR6psL,9u"Y˴dm,$Θ ,F#umu]Nejӂݵ 697>>+..VIP?jD=Bl^V!8߽=o{wx~eq0}-; { նμޕ]Umhx?*xˮ j-^ִa]PCҁR=JqjzQȔIf[#kcCMS>ѐ|iU~ ,k`Y{YnH-fόtNz"dL ?dt-K|4(VPrĊj͐bW;%8nVz̍'\JLZiDI(Yv9N^b Lu0. MZ,PƄH©Id xp`>QJ"ZsŚs6 itE[;qO'J3)bDi/ZN-< 牨”_dZ?|Z½L;'W}щ4H/`>4М/4{{{ M`-`bSDQ-4rf6^1! Zњ )(T1HT46&ǁs84&g F8`K#'w7e[gf4n>r?oieuf"L[ݼf?aXv_)ϥE;i9˾ٮVީp˚WJd:?'׆gWʁYu4_]zǶ P^&>'*A?67 Z^[î݉[wO{DEY5NI  TA}=*14IiP[:jzϥ $˻) UV r~G Hc:W.!3K?mA9NyP'qZ/@[eڱ$AZO"mbX67 *1ldyo5 [ThA)$C^2*yb b?7`g(țkv| qrܬ)RrS)K#V+^L(Vv4ߜUV ۽gaկzQͺR/8v5TEU3f^jɘ"|n Blv_/Fm? &UcM>\p꧸.Xc#Oޯs#>KY@nۿL\ у@Is.`WڐUq3aUb:ſՖe7Zbp⋎se r+~Z|Jo[7j}]͞T~ZU7bbeo-$@&9lA{Vut|J Ad(N-G4RgvȚj u&ITBa4+pdGN'Z%kimh9?;vUâw;pʓ~gK[:C[ߴ~.~r6(@WŽ1.-mDR$HJj0͌N帳 828%x3z%H!u9TqY?"gBSRW8sx* :T␂!/֜u5h{n:+snnɅqJ'w|>9N:U?{7r @(>nd`<$W[Xihdg翟jΌ.FWJXԃ4"9"U7nm҅t&tKґ؄V )QNԻBEo9s}]6t˂_SgY˭xW <WOTb%#gywsQwW;TK#]feA.ljYoM05\JJʒt,bѱyE{Ӈ3T!,,ݿ ihz6$>T(V IDD&{U i/X+T(YYM4#ݧ5ps޲_ќT<(߷ dJ JjJpֵtyR?-~NC-^ vvG:}k6r n0"|jVMu~ߟ.J:>"Q>Ѩvm ״Yg)٨(Tڴee_Ow)zUh J~!61?ő^W_ֺKQۋjZ}tcyI?F25RJs- VX r^5Fl&Mf'u6+'7cYS(r-Nt)PtP֡*L&;^8sNgT8ךv_]t[tw=[hP7ٖN~h7bu@#W?tp(_˸]^{X_e0ʭؕM? 
H_&`9V srÃN=Evq-˵N \bI-\-`隭_b Ы?,ȆІܷEP0#C^\p${qOoVp.n\`9?#\ZV1fmjuWyI j3o \IYkY9E`\=\ +1氫f.M+4&zpeWb1puîdWJK+p7`4~cઙkqSYgWJ7Ջ+f:GVFW&Raڨm+\S'N-[{:x/  R->ﳭ浴#6j\Yzny5* @h˂,|X9TL)a* 0\2- `GT@NN,J>SIiRe&805c,'9q!qM6T&!S֪R$3*& lf3d3N{d OF2b3HP=Z3dmYAl sV!b!3 Wl2D=|8}T.:6$PѢA ("!rKicFtI\N>6";*Ի#e~߶=~:MO:x:>xu0hJ6C?Mqdw "zehgݎ€>tEi@S肶إH2AD07((NuDאIsuUP-D:6VهcG]pS4xVTؒMQ|/),H.gQWE]xSUEWhpPhZUy\IU\'lI)dԺZ 3eX sWBc E/n2V 3UD`mJW4U 'YYFcSJ%X^1+-{`ѡFDŽ p9Noײv"} AV Нi.<3)0(h Ws;_ob_S>u8ubB o? Ѿ=W(> =(:+_k'H*D(CR.b(#9d͹*M69S!tQ+ZꯓPX<"sv"H3c^(E&5s3F9)y5)T廗ߙtT}_mnw2_J%zbCvrTϻRߘ۝KV~p3#$2ó/rOYw{aV:s/RE(qLDt!(י**(W'8S㞢-h}w̙M ȩ,l=hɗu+ |RRk0t#յ| P@(I(=[0j0%-rIQha掃sQ ;<:dD@ݑ{gEӴ6~]ӰʇS?a(w4D׌2$gU1A|s9lV|tފ, VFhP{֥֖>őqJN:$3fN|ʺr}y9NYw?s04rIdG&D]7u_%[+\/jE"ÿND7-(9z( PQRqwT('cɱ0X?gd ּ㣓ÃuPҠǷ^6w>yܯo=r gjz`kE.ߔ-RXG׎KUgp-cP X!Rz(LeSN9a85kŚQ{IODPZd Z#* 542f- Uz}B?abEڎ?7fff}پ9d_}tao?{oqsP5aJl Tur2Tcjvl MlX`:4aWvj< =197#v@(`^(cxcgmI ϵ,wJ5aɐ  I)5bo7> MQm:ft)Yt~tB<-Oq#?9 5h\ wː-h! }emȆd}дȦ.mPᮓ,[6LrewG?{6WCKXճ5yaf;fNoVn5\hZA*cIan fjYmtƭTO ?uyGILrJ͙"<CaL8Qki e(k+ڬv=䙁ͶGZO `"-]׻]wkrh]i.6wzR 0^;D9?g7wY[: ݇y6e-ݻ4z^4y]hZnևhZzzt{ey֝o؛p=*Rsɱئ7M7wl}S~1Ǣg}Ts}'tw ]w>-\>]d 1"gޢrr9@{\+:zG<N\1!7c$Mo1b[߽r>x2x_ qk֥hv xss3٩w $b8尋Xk89*U6qsq(FHzNcFBhm 4et)mTLBB;A ,P׼$}Y:N{l מ=MzbJgDgT=I\̟Ku$-^>Ip_oX^(<6z_,uN-Ew2ZAɑ!gN\sg%FV{SaaK]K@A s"crD(XkJIj(Y٩aE{&CDD/䥤Dp4bPyn mo[Hߗ)*-'K6T~yCv_~憇h9(5PƴEq(I dy@|ʹ,i O7g4^|nG1N[&,~-JK M(8!MaFzӉIpݸa;<}`xĖٯ(Rc^[Px#C4{|gm3 $u_2v$l)as4myP ̲C:kC4 ڶ7EoUj V.}EJW:]b8ξޘC* 7Im84I'UB؋OGD@7 WEZ RIٰfwSl0ehk]A->Jp|?FdG1$Y=RF5BWy2#%<)/߀Gv"p3 _De:\N|ںciOB&iSZ&h'šA{Ll9S'Cs#)N'N%g? Etg9g+_L0AET3$   AJC$^Wо9kWgNA{ h} N;lxdBs1L +X7z.^eVȣ(ݙnY{Ze}1."~@}\kh*朵k*D1k!i%<XHĨ:=V)l=N;[B7zf!% 3 ^1L]+|Lqz?<87Pm~V˗7>e8MfٛBЏ ,rx|cz3h]0R ǫOW%WUl q),s:OqzzV>ne]A_]JbgISR YY 'zcAxy)#^1w =ZYK0X`"k""p TQ͍)"mQ'(߅$\\30seMwaȅR+ٹ }Z#(vD؏f:ԫ!scؤwTz/F}fh?jqm c$REHL$J@Y$Gs4Hn ap) *7!vVHf#U.xxPyf{0!Zh]oK89lx06.|u}ԩ`~L ]o/[# 5Qq>i6uS|V:j]jLBW;č&* ~eA<~<H12NG dSDNB J1u',õޞS0'aZS  N[lB\:$UX/Ĕ0w^#AVXƏ# Lu=ُW6ʈT-Dn"(\pE'S+ޭ}B$om~}09_toɳl٬y Uٕ7ՅW70 I\"z1K?LKS'wB-#!kGb|HmÐahfYYކDQ*4o]|T o.cM7xmɶQLAЬ40hR׉cd^fQߩ::˙/AǷ߽w߷û7篾{{:o^)جOD"07}hEk2 a\q/S]97߃DYWvPt9vSH/Z,}-W o_)uf%lٳ.URq]ʄ(r^@s;/@ tƽMoV'H>sT""e7GÐ)fՆI҄ׄU`J }64&/V^;/g3-3X#gҩU%w$%: gqL46Kw:9sĞz1\o^?9CmNܹs)g v&mRºOi r4 *3 =@bҹVpb#V@º]SHNx~_5ߚ=-=y`$ B($<3`MLj9ԧ4Xpӎ۠*Y6pT$V/$߽շ=ٳ*FLd=}Z5=ƴ{#TLq8$q絈+׻^cƹJ/!ę ;rN;|IiM{&|M$' KpI7*UڐE_S7|Ɣoo"L>)\&Xc~p=q3f&.8?ä]9 y|`t md{A#eμ4-M/gӻƻrl~!)Y-dc2Xoy{9\*;cYrVS_^;yM^ֱ~PFE Jҡ֗~ ?Wɖ XY5OVwSVIy]`mIEdgu<;$~T۷^US 7{7[%؄ ANXBXD+t,"FWWĕYsD"F+kD,"W1z>I9 ]GW于mJVj~ Z F\kz+ P H.7vwN#+}L˼Ia_(6axQS':pi/R_"bڝ!c<0 ^52}Mo3BI3d#N9Ġ2C姤Ιhh?c[m+{KqTDu:e($ZGQ&Wf)EU *I=>b4@s1}szrT[C/)LF]lT4O29?dcͯIHmHV4HB`Axz[짰zwq'S ͼuo3!.MT-2pG^(wjMa).qdb/ ;(6˯Ph#Wý/ֈΰ:'uh:'I.X:'I-}$BXIxΓg.GAH4`S!p̋)Sif-*8.xVoPJK-t8- {l"mU+mv+-:cXM3_nuO uA7sR+lTgj\<]|dv*a&TxX[;[vrH $#][QWsl ;j`X*?^2Vo].O5))Et60 9BNSTZ4SB:@=ȜќeXQG!j#[vgOȍZSW " HHr}oJƶf9bڡkOw)ł+RzM*ǮAb[WAD"F+X,"p\J.F\ WIeDBBijU<\Zc+R%VWvW于qJVSթ \#5==8`)X4"Ƃ+Rk" 88bH Jb|q%8<\`m4B Hp}vq%<&\`O:\Z+RjRY51xLYyP` n51m rĖ]yq~2ajk8{m3N/7 ~;2>RLߣ Re 0i# v2.g-g ՂP}[#+ UQZ  b彟D*q5@\Y)B!#<.gR+{?BF* q5<+'<\9E44K1*q|p%^̘̩qJ0ZeU; WĈ}MRKPb2\ܮOSlT 5jʨ+W$WB,"JWr+W$ذhpErDpj}cTr9jF+]oSN->"|+:ɾ§`H 4N'w.FinyϏ7i.Y WӔ)l<*!ƣZקXx OOigN;.QځmF;?뷥\Kیbl mF+,##2$Rz+RG\ WF&"\`ãɵ:w\J`#+S&w<\`#ɵL2GT8`r2Scg^ŀ%\Z U F\=\Ɇ3fNpI:n Rkv*6 JqplDCE+ 2\>p\JF\ W(cMDBVhpErU4նF#+AW{;&\rm,"W2>j W$$NĂ+T+;HeFG\= S:sdÅe¶XbDtR+5[{wV7@Z5'+P[JTDQ v񌐡\Z}2H!FZ.XD"N +Rň8=8`M4"]nV2w\J#+f1 `ɵ:\ZLqE* JĄ+F W$C,"ARiWՌ9'WwwJbS+:Z;NYWj+5j_spEW(WC4"+RV!+ G+ &\Z!+R)݈J2#W$}zdF۩3jJ8߁42VzxviX,'K>Us[e.Ctx,g[ zwv{P+;㠝+'87[ZxO: ж 
6BG6#]kNmWSial m˭q% V7alq99NDb9WY?Vc|/Oޝzޢ/kːajOor%Toà/g#Ţ^\Z%X%rFV͓Eϐ[9?y5I]z9Dp'8Xu!5*lu4V&QDqQ)q{EGyMn-IRxD$DIgɊHir]K9} ذfn6`_З/fɛ"짰zwQfAc.zt^$~{J9Nbf@9"jZ@TT$WA,MKRkXqE*툫!ʂUD+!h&U%j "gO`xBOat,\xpEj+R؈g+z8W=8?`ͻdvj:YJݳ9}M\lfJ2\\cWO+.2"\`Ǣʵ]\STmA'XmDB ɕDWVپTZ6jB9St%fJD+kP-t~JG\ WJj%b•a"\/+PV 8 kkAUVm^]Ik^&?׫MiErWهNAܛpNjPE/"**BUu .) ]~KOO_4\'"\agpbβhA +L%J?Tw >x;of+wb]vwV<ºy쾺m.6%rJM?+-U_50U7wl+Sle3Q!r,kg̗ٲqy ޸G͟y֔s-t#ެ!N0ی<U+_i>ZdӏrƦ(>+=O]\f^l@[{9OANj)6@3 m= \aWƧiγ\B9°\lv`we6(.0 ݙkYK Cs.7Ho<Kd3L!|Z,0n>Z=[fS .$F)@αUI!GkB8 rs5 :}c0s ]tp;|T2:V+}y9| E.)4CabP$\HDqLQMPA+g47rX,OUQLd[0%gӁ︰ _ \35EL|X fQs#Co&X)XonAg`}ј C+'vJj'+Mu )@c9( `l<0F3Z9.c;-*.n`1^!W塯*Ꜳ_f2Y cѵlL8bA,C֣Ҁ".Z!()mɋ3ʠrY`l!3f; a`t\Xy*lmXc3fV;!JzrL(B,%PtSlBVeSA"kmȲЧLڪCX$c|Mԫc2)"}mR$lubutc$fvխsSFȌȆv2գh\F1XBPP&׀R߄LLSJt"T]Ae)Φ` AuL+j+1w\ՠ6AwVrE`!S zkf a( VH(@Lh=4F[(2·Pz[sV z4XYu0wD `L ӌ/%RBbYLu>A% Z4 ǭ`=*t/ őXiA{ d*3[!(Q(ʨqGA*x3jYߣ|AIPS!HvPW.{/QZ#qޣ.PVK kQ4f^n!GAU3_Mr^KYRDāj`hV."2Hb0۽@VQ=j+P"5Ht@gm# %]FŌUDb֛H1&@T1yQa:I !L&mY; lיx+WQG5t1ujZ XA]- H mZI|txK Kq@xc#XYd :AJrIJ%UA 0 CMǃˌ,$tF^8g Z Tx" D&jZ5`A>xmBV5$ Uxƛ Lqg^qH+@(|u* Àe+Mg@2)/$rƇJFD9T5]!=uFAXc(6tqHU  JCzul) ʽ.H!b!C1 `VRpIt EWKlAH_ m# 358D'e(}ԋz/rypx6HΈt+)cjPew|xS|.Mw-Mߟ|41zfR.Aή/攞]/M/ym|n 0 Aoj溍ًj^S3?,4RN.ϥXQKasowa '| ][ޤ9q8z66𱁏 |lc>6𱁏 |lc>6𱁏 |lc>6𱁏 |lc>6𱁏 |lc>6=Yt!nB> (\bh>Dd@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; NF)m兀QL hxN M @0@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; t@7%z~|GM鬇o~.YY^-0Xp0%㒕P01.!\&c\Boa(Ul\~3h{c&W̎k(;=E\ Npv2pEɰ++22zpNM0B \QN(ZኢԞ UtZ?!Bױf*pEGԂ՟=?Gpv} W炫ҋGƮGgԡ.T \QAN(h;\!J/"+4nBpE+ 7کUR>v(WlzBpEQJ9p \Qwኢ •̈́q2pEN]|bT,"\Bj\z~N0UZԼ|;Œ $^^-3tƉ01KrDšOfkDz^VOmݽQɼ:G>֗8&+o/qqZ~GlZ>wB`c^/.j노Δc`׿':w&ET N}-}B_YZV2Pƺda%oʒs׫:E+!d[4' }n8EU(mlLնwMƒ(*(~࡯oڗףr GZ^J01ET^Wwjhr*DVリj]vePa5rnYwGˆ!f_#ofRgggݷ_~1\7V$9dI|Gτ:3LgBZ wUԄ%pd.Q*C?=p9X bBOCڮ]X lmz׽6d~ fW_,}j3= ^w:bK#lbY,v9$.V6hzV7tunnGnҪδH:=o'=a|(w`^λi5D[VU,JVU IՌNE,^P+$(|{98vK J7Ý7C|FBlZmf-8իhKs uozMoڵs <|woy ]/;p^.Wu.WףV ۬Qu~y6zW*|M.~v o m,ښf$7M g`C;=?mytbuGc75Hy7qd#m޿ۺFn|KbryK{ןozoޕw'{s _Od{ -Wo~LjWG5ݑ+5Ok7c={hZ_x7\Z*庨Nb("v1N/O_#"33)w?9/nζ틗ۚsBtY᠏ݡދ#g'04ۧ݌ץX͝qny*/Ϸصf7one.~A7'|S hL͢VNT!bUvgj CJץS{kQRs 4 m|)J)Ӵm߬W[ň}p/s*1w}zq9 i'4.]_n9$}]f,fŝ3؄,V:Uds7k-rUړQ[ط-.-3Dir bL25mRhіReCQoIz-N@4 -gy֟`|(g_oS)y\WL2~zY!4Ek{zOcEt桑j ayN顗l_zup!n {b@l88ޫM\ӱ6>2w*ɥ $S޵q>i& ShjSJJ {goGQѢ K-u;3q)~Og6s+5cAfP Y,pI^rH4gcȱ{,O!11/1hgI B7\ <1,C@Bs sD)Eih52oZe @qc2<|B`1hS Mfc7chKnw9eQ/KB.%jUL)p+AtS rO;:qĜS}6%9cD*1t4{%2h-C629~B5_;o Uh@6CCMAmͮ,mI`⅀vA 8o,?t3{#s>IL.NxQf4_mzG-쫿|Ny>>ͼGh?럯uՆToɎ7GH})'V 6}a|reŋ٥Fͽ/gߎ./>M: OipSvH]Izӣ:7-.b)7'O*z;Iy&RxۼY%v8{zu$jY}H'mbʎ&{xUHt7^wzkW1ZwM;7Ԣvr9˛^tR;M}9Xo 4g.olw=NU-;nPw_Hf \G}~f ʥwyi杛4z7=koTBj_ZأR{a[/$Jܔ%V{SY:ݲ =Y%V[De+QeBӆ{4,7Im fBΨM܎x=6i ͖ gf],`GNKR-\gjLjs9ЩVBd1p2hThH%#Im&ǐ:n;LOq|~ z8SrkgcŔ̶6%/x݉9$7Ň'vil%A+n+ ëOW9tH[넯»j\;7UJiФ^FhHy_eD *X+J(̮bьy6d"fNer0]ƻR%-O}'irNZ &'~ 1jf&&Ҭ3o[`[\{ҰwE ,^n1ն^o۴5M~n5|70l0Pu$~pD1Xi1Qjp,A˦9.%6yQ i@mDiR)N?=EhG _3BsUn\AG}8jAªաe݋6t]虁ea!'BZZ,m#yU9a h_ҨPQ+a$|坧ׁyb=bI]2У$/12F*I@?AdQRF ð y1F9wIЌz(BJD"J bΐK X6dὡцMsxt\ͅQvظ |T@i~Ͼ5t/Zҳ^v牠72-ffvVz>QYPytsey*(N<(k;سw3؛i|"3hJJ[ɺe!g/Og[V nJCJ7Wj.%n?;GTqm{R79DZX `(D)4٧@=gcx='yO}9{|'`vvv*0 ?4a -]C8eFz2˥2]/( %ڏ웉"xjrkoɤ>hTRq\;޷Ok෇ngӕW0^ʬb:!@ )DQ% m߄b+m5(qg+3} e w)q!X`h@z\U1 иy"|et4q̹]q{%WfpcxOns!V-'n婼%ix-+=k:.mE:BDK.k˽ B CJ"ChSzu͑(:}PM3ߒE10FXa l,"pn"#BHZB1c}JL` ; \>5nމmB<~Ҋ$PEL}DO1.$|3 hTi^*T,,^,Qh- rV!$:BjvIe sF"1xmgiCL=>CRdLe2Z=z N)l҅It7w{lǴq9q`|I']ޯ''M0ҧS(/';G(d7u'MVta1QKp-)~g\2qtɎNpt7G+bGi,9SzcɻP{e36Ӎ7>-'( \.\)GtK7KW3EWsEG/B\T> [!gY#)@1_(4zQem̛ʚ{&MT?O|W?xy}u620;`ưd0g%Hox1${u߁?֎jHMÈanfNO E_ 
7/6-T㨂muȦQ֪;42ϋQ(]ZW1?⽋%۩6:t&5?<ǿ߼wߔ_|o^¾}{I/WijRaz~?DځE#"RуȂ+ɱm4ԑ◩^ P\ wYVDmNg;Jڙ1}87;ې~] ˰E/Cێ=bL!ڽňs_ ,b6L>V, A:i <*RWC 0Xe I]留zIyYsqI<@#2epȓ!&霹,d\+I(.'3HSpCyFTX'oކiCI4ݶ-$|6A=?ǵмuwt0U0[=Z@y>&*d+^T1]1KޡO%~`9H0Ĉ;$ 5)@lyt&{B`ōϐ4qU"Οx:mĹ?YG\#]7"bJ_t)ۥFf]HwV'x :c oli0jr,JlH!tŮ7XPP*V)X( CmgR .yҳR 4%5t4@"g9d-+ÎV"ZRetG3tzy1臛f_" Xst"dV,eB`!JyPD93U4AW{w!HG)C))*LJ&Z[jy6>΢ C[Htt> tg}zۣMĜ2n u o)"nOYA@Թ"EUyT-~YLW]aҘxD !%=A6`DEmL^eS*/0] L$R@Mk_Ap+5hkԙHsxjw7=h h#h\Wp*ǥ$+k3>;TvLj3hN ;9F%!iaZpzY4.ElBiXUJQJɢ*!xSr!e13CAr- LtY [phh*hF((|L@ V5I'ǟ%>an 2vH3{ׄgr~8 i z}V; $v]A14a 3LW5}^=yh!B7ɒ*ME'-4P1K-(Z诓˜+_cq*EUa=ERȱ3&,Ӎaq ~Z݁i*죏+mo\gtDYlCQF 8* @VUgp4G>#DPEbv G#!bc>XUˊGSTO<=ք;s|Q,xJnڈ&H* iT.*s؆:EȦY=;o]oЬf$ [ /ȟ3\AWXWoxІ|H*QlQ7_Qۊ"RtԞVMcg27F&Q4aG>7joUj'9ǰ*@Wyw|~[t,gQr?q{~VMK{I)3Q2H~eK?ٸf^׈^pދ5ck9V?}ɹYWnmy/.:˒7VOs/>7|9XϜ?=ZK?i?]MlByrk::hcOOEm6u .}rzJ9 n҇@[E=8C0I!BTq&ϓ$*1qB L$R[jP̭oV`d2 9ƄTvRӕ$R|-U`ő/%G&bhmԬC)ZwJ7qnYr˞R^tݻvާ@)(˵✬ڷ=N"56Q㌥TcaUJ;󠼍ڊT פ۫\dש`#q`0rtNqK >SE ݏGa,';4:nzr:˨GʝCtGg[ĵ~ꚹN ^}WMq/7e2fWE(zmYTuFx *QJBaʉ":ҷtϲ96e2QCB$NL.W!:gbp%[mMК0TqR=c7qnV i]}!wn %jv!-@c"O˴3_ag߸.`WbL(mY2(;k3˦܆FBebm8 +t|vUb_}Aokޙ%;Ks*^vLkg{4DcQiJ?b9L\ID hR[ $oJօ4 CdQXt2ʼnE$d8!*vTBgMH71N#vӏzD##:7Pj j T!֔L}>RIȚ^>}kڷr %T֡pBhi#pj3u*YvpJ¤ Nc ڻO7qnQȣڸ%wع'/~.T(`('چP9+0蚔`SH] 7yӣvBgxǍpaYa_Yy77Fwb|3Wicr-dIPvagdv%YIЙOnR:a6xKJ6 p/w%wդe3uw%RRzJC |fzWUO]U3zJZpWs7j|e8l?#Qܨ#Xlm)׾<>,&4vh"bMH~x׿AX_t~ikm+q:!>=/˷''u枯ZCQ5WO?ڣZeEc0)?D~@- ޻eM`6;ڊZ 9hjۤ$7rEXRr!{kU_.j̫+ej^Y?[Cy8Р3旈i+G^|m5;E`7ŝqMZn_מ,/ow^u0["p ,4ww;y`-^j͘V6ڍջ\y bZ<6^mH Wm d%=ΤTl}⳺<`H}}-28-ؿ]mmNrjǃ7OMRp\ro? |/v48mw$z:paԴ)>jh: *d <^3![~jvO&7cB-?MO̶SsCTs 2ʢgA|1Vt;@6D}[x9oJhD|68d@S֪eVΨFșY sY[7qn5<9>uM$W OӶ1֛7vmźSi{շ}@CLETnϹVQAɚU|5xT)ku"UTؒ]mE( ӂzXj6䩡1Tl. `W 5K@sˎ_BRc)%/dJ xYxKA>Beբ:ନ-`)6/i6.tB]h,M9Is IZ\)P=*eBtMK;ըC#Ӳ1x6գ/:{MHc)@6TbEs.+P`61&1ɂ:؄l촤j۶g=mMkVM0h.~J]\Nt :]40mhh\HpvAÀ," h]ђ J\Kb~MZh%[dJkTqQ9/qMg8)d(Jb%%T!H,ʑ+Fs*L`!Hk:yDl4IU\caOm5Z̜(+Mj shb?β5l|¥,)³̕1EUB Q*" 2Cc f1<و+['0A|ђѢUЌPP !y6u |h vz6Mu~;tH3u9049E=0y(= *O!>g\ :<!CHT0JjBBw % ~G/,I)ۨbdP }"ْ>Wل\*L$:)E ? R>[Yh&ASN E{%͠s4!:!q\yºKHO.Ӄֳeivk}v ݸ8Б&vBwAL[+an4)=%ϰكv's[0Z4_a>sF~HPq!e̵3{(AZF N({e܏!S F! 8/5Nhte]W#T]MNplPCXāpl{&;ol-Y٤8|gM[nƐYliBLtO p_W~fW& r}s8:Rq$KrNb>ǜ5u#+ݦoJ;ƦѕΠZ*<`sH ɀb#Q KNj@t6nlk]E0sKRV;6Ǐ>|Kİi}k5vo[PڌKzoD^y(_=:|8o-PZXK0C48yU<噷O3.rxUM26bg">,.h6)AR!C,h܃s{%e6j鐜pdj2'ًOQ9 H@!d:I#cAεW|N6'GyP kpI,dG. 7.> v߬L!7d^#1HqTpf 'z!,(%LΥl $I鲃$AU)'x(䯉9Ԧ144G,֜[;ދ-%-N:7RX^`}546O y|t+QLd>Ojj?췉v?3GcZP%k3Rm$Icd߱u"$(:i@\%!qc"Ô0y<0!5%(^O1ĨEYb21C34gprP:BD t<j-Hc,k }B!y<?y8K+Hǣ$>j-iQ^fh.D*"ز[Q rT$Q"~gIq> B3.1Ԝ Nkb|sPIs i)g&Pb,'{XXg%%b,yx%Z/7;Ys6ގMC{þ(* q U> p%^ $*Qr-"# %(B,vFB+yFZL>RHDo:$p,ؘ>cPTqKiN2 U5GqbXXld싅0  W-6gfag}qkz6;QtПξqĎLq@8*0fr: V*¬ $IZ-dnuvC{60ceJ{ rA 1(@%kSzTsv# Ú<.6:MzfKpwMP G}ݩA7I$m(j#7  Y6iJ[22C*PE{5bM Yd+$Qx=ꨭ-ŚQux(Xl|싈PCxbT+d(Vyd ,"5+8GF%eOJB@mҔ#`) PJFE5{TLhn5%hPŚs㈯!.Nָi(Ext&K@E Yd!L0"XJhwz(08CmDCw_Ya Kd<:=&ArFf͡1X|pHDP: x>ewwKwI}ۈ<(8Ÿ>v>;^Qz{(Qə#'`NZ/ıʍe=hr +䌳"BF. 5%IR^qFJky7`Ǔh6[{՝&( YWtʲ*K8"x}Uoޑ2c=.#mrNqbQ8X Rslg;H}p:rMٶyi٪'츚^RxA޼vkaSG4i.W>>()wJ@5!! ZdSht]fR:$ҽ>}=xx,]A.}3?Ǐ)spEhP .><` ǂ<$ MR1BBwy,tg|,JCxH(i 8 951ZtH ԡ[ˣ!l8SW~L6m۫iCnhޭxhi5xHN2G1 J$+f4cqT+Hj. 
"Fm뀧 &AF[C#<8Nq6$~PCr1b#4M!(TyJ0S0rp<>ib\ F1$A X8D@e/7{O'-ωO7Ejfo?JLv5V ĂoB|gn ]oӑgߺl >R=J `xIeB"9$TRh4Hd\i(Ee9vI,$S%i%h!=;?ڱ>=zO|m꥝Ez*!L>PAb4ػXgq"G#yĚ\o͵s=|+ =[iJqSEvb6&\k?OFڥeیX^s_GRHCXQBj ,p%_W_5f^(Tn_W@ypncnzu~ UUsuEF^[v>ָ!6beѪx4oWvlH6il:xRJwvz/ĶXvGClo=|ڡk_[|_[l_ߪlsF/73[QO)fr~ awkl?7ΙՏY P>_Z׫O˽1hōѤcϣqE?EA^kZ Mm4!9_ tNG,?iޫ7BnFf>'!uRxG&Ÿ-zЫ~O2LoG'{sߐvox| 8p@q ,Jls 3D;MAH1G)5ባ{UVC 595F<5Tq, yb#1(J//՜jpL 3;/\\X]Ӈ؊NmKM,mi"iHm|jEŊorgYgj&hxK0jw£ߧ8)G"O=+y®hre lyNJ>/kؿZݶvgZiȹƊ[ǖ_i\Ж5O 3{[G[x_x!D@mx: m^$.{/:y-{xoEX KߤsgKͷ6 cբ^{yiZޭ|2.?Pʷ4S*, ܺvU8|r6G\T-]GL+Fg.k]=m R*^Գx ho9subՂP0+#0ae7hlD&tI*6'*NQO|%,JEk1EpHOEL*tTȬ3nA\_Z6_|&9u2? qV3,y{v z8Nnp{Ylzwpt:ܰ~qv;G~mS.7=̏ٵ-ykQ]^NY8YՀZA]3.MN~%(YS(Gf͙C +WC"5‚T mgBG}<ځåU埫cYɲt]"]`Hd1Ĵirrdz<¦`I-Cy\b-H1)q;/Nyt (_K>?E._d}!<"E<,lx7Ȝ;/.>p- si=IRIPDB,=#: scZif%* Q9}!.!9E|#m},O&crňNt!`G}b$gްQ|rup"x͂KSL鲎wY4 DP y 10K{73I]/:3sLM)D=AspuنBdkYJ)~vLy{U߷cێ>t^ '*Ⱥ)jq{ʁ%1Zreά0JuIʓ}EPLO#Ɂ{ƔeU@S008G,L>dEMƥ@8\zg=^ʾ>1]v>n:2i*I*P^SeB uHAKA*膒 S>i\J95=8؀gVTL\ZMÛr,bjltoBqL|(ғy[B'f7[Axgn't$. !Fr&/ x@=rWdpIZ`/99_N(~K>=P1pZPAgmfcQk4u,=@ZCs/u' 4~xκ9d MyF<I8 E裥);6+zzOn>x7=6ΘK|/?Mvnrd0 *^[dlIMh5|eyg)iŧTϲAO<<^8KkluF.uY[%x&eR=HXhJEs=.T*_!ryί:?POS_ޝw45aVGLl<3 xT_c󪍨U57ZܤjUߠ^j6yI`%n n;Ew$=ۙFQZGiT@0Ҹ}ӨI<А=-⑰}kJel!ebr#v%S> n/hmb vʑ})RH B) pUg&DLEƠ:ԬlrA LBcX{'f״>NW`.jHң˂+U&jItltR8J\UQ~ _rMgĖ^uޟ5z΃F +ӥF|B,v2HSfCa /?LO:h`n00xm> :秾i!qI& Ia*,$mĘO -%FOIr<"㾁45,Q_:&-|gw TG%,Xbf,\ߗc?B3ӗaLbdK̴+š0Ӣ^XUL_!3R+2Xy0pU̵P \p 9`X+>*V*++Ωb>o_`я>߱e2ݞwDjʈ[;-ޣa(AvYZܼ1?yx" ᄑ}QYIԌff]O,>+{n*B+P\U1CbWJzzp%v@pE#;}\Cb-7WJa{zpe$hv?^m\%]l ~Sv?w&)j)sN]^ Sj0XF>x DdzݧS9GTݛg4WŎbs1gU+ա`sV}_'6+i$ݺ&m 3;) ~Q[k, dY̍N fEf2 Ajxϒղ,N_,aAS$nr/.|OnO?6$/)XϊvalِMƷ?b1ǏߤgB؝[ز~o%1L5 S2ߞqMe|J)`|-/hm#J5yc]KL;C J),ziw!"_o#-ʪM@c*Ĉ:e!`4܂Z .5,L̃@CpWNoR͜MǣJVuMS8&Pu6݁EAG7_~m ֪ˀ|Sy{ɂ>˔җG7GrR唷(Tytl۷wqti_?9`كAӢk.D zbOķϗ9i 8/d(=5TREEˮKdt#%1MYnd҄oN}dɤJ2i#xNf)6AYrRJ+`;m'ЊD{sQH"BHu),Y[TƬe,k(~7kȵcj欏 ޣh[E%!fu q;X|[OCGbLiCni.0G@zBb& Km/"y4ڂ-Pd)AӧdFsuRrċ(ن$ ?g 姘h(+j.0kLQe\3g>t2}u/dKɃ֜wx=mh>x}˙o.?aݵ5%>>\{ɺI稼'VTQV:$rp$)4&k>FAWEFet䕔ʘL0Rl=.E3:%D j#c5s#cXVmTBcN΢ҶO%2^>خl}Ndßи/'8b'eJa#`DE*bK\ehɮR1"g{.xMt PD:cf6U9PQ˜G8G1yǾQ`Wy< /HM:gCMc0YM02U,v H "C@bA$MX1H Tdș1S? Dv2V3g=~׭ b5meD="O9VYg" 0!oZ/cD [(d%JȔFIAg|Pɤ@dB &Q CQЪ`YW 'M׹ilXYY=.vwP@0 OᛤN6QD`aQ Fn;1`Yh0f%bQ=܁4 l@cti%!"Eњ'Q(LZfLY1LUnwȓ3"Ey*0jDfr\]͜ A x9zC#3q6 jeOhlz^n  lWw]R ~fr%̉ (XJZ. 
t&Yq6>MN\%u*bgP?g}x<]%;8lέ"Os)\+zfBRh#7W#9fۥQ4a;wdI4/;5ںAYj.=gP7k%CH1R(q\eC@D]vPڧ{}qqW'?^][͠>|,7ˍd[e/_`b,UP`p:D2 XcrF|NQ >Y\;O`)P,AyV?{Wƭ"_.е" I{9E~hn䕫AdJtĖ9 !3))I zkr JӾOG H]lՔ[Z;fP-Aity>YCXXdš`(c`"8A$D < x43L/|zZUOy4ͣFL-RYCd@h%q&7&&Enj͓qˇN:qȚK}HN)1jQ@*EEISb/IR^ *h]G!mD um vb{^B'n*m rއn6>0Ab {ZwDHg/v/f*m(~< Ɠf1Y|n&Lw~@`e2wRIr%ΑTP%E(y{RCK6l6Czx910y4AOZ"p4ȾMHb|\~6Ib[ΆsOɾ.>8:'O-> WsBA_5Rʐ W9_]AUH;ӕ֘_Mf ;p?6oZ]jn꣸ީ-w+p,nc&cH{9DӿgJM|?6fd L\>/lˤ~0W<=OO ?xSCl`ք]OM@ K.y%1oN mn8UZݯ̭p}rs7BvH +frAJ|D'`4)Ӭk8.R# ȥ" ךZ9g-X Q vHZ5#c,1*E#w[^GC*2‭iU"bK(F[ :aA t98Wz0Cnb,o}urQ9:uZfк `*кbzXxU`x9o=KyVg Jx Ns*^g(gu^Ó|30CxCOD0at1eM3yBy@Q)o䢢L ^*?,$g*B?T!g>q Ҫ2TJIhSu8׭7{I^LWso([@W醹b\ujc,4%?V_Լ4{wIQwd/^\T9e^u5Iܘ]g˓+_漀av{0"r1j;ѭ# $g\Kr19t.,Y0Ja0J#68R@'g)OKab1VVKT;_X_~o/K3 h>\"BۓI~&sG۫bh9Hsgl̋ԼQݽ[aŒ\0ԒX#o2Z[z,HbJ><uRPE#)6gF\tmw`5G}8j-#-taUQVl׃[>MZ^l{HHK Y_=Z$o^MIFby sR 5&/O+L.ecBu ᤤ3g17#EMpC~O\ʱ2psL(QmGMȰ,A[R.k%'RHԤ3rBBg!mxs{v&h;J[J`}:>av>ƲU&oqӫ.aºhsoY*Ҝx8{ӧizZ˄f^h1a<ю`$ G&b+A[9PDMO+C0!B*oYX1c:^ˈiDk45[!-#1F^ HM[L'1zӄb0Az;ܫ\rv|zWT[LYf4ݬ»7=Vm6,NHZ1Q__!Nᓀ bjSyT;kwey'ֹ]nvirA-/)l{{S߁E+4&AM|GAZu嬿5ßWݞ&[FX\~,#Ap|K[oo1:mPs5g:R eH!Xv:Oz;ToVgs9ۋ>0}}gh?j1m:%,oP=^#:>쌜m6=BLt'>Q2ha;o-ļx]lحOEۛmy-+-,:&w;By`&.@LJ H&&EHˡs*HHYE`)Ηx"ֲ= EwD@4:&NF03,*͸Jb,A"A$GA}b'p!1}la)9!ѫA;X\O lT$'i֧hQFwٳyr/KYvJtP6 YE r&@ 80)1rrT0zAV rKY@H ^Og =IETخM89;Wƅd߅i5Qڂiʚ>M ]o; 탛vllꮲ_j?=,ঋ)$t9Vʹh)|l8E]v+F)S+0҆ "a:9RL݉)piG'I)VÀP& -,I֓+1e r5LНUYPT 0/#K"|vN߇TJQ{>iBM\Q@>S-r}B$,6 `#Ln`Ta]2vVWO[kgES]*z6hXfwaHJ |q4<꛹]ɊqQ`-_\b1PoxUR1qV{tHbaK'\/ dZ[RGI+9MXv=v{㦝'=j{RHiIe}4iABwN;_LlI'mZHbʕ&aer)ϝUJiDGifZl_!}dez"ɑW)tvizfwVyٙJj{@ K3#wpRU 4$t'PEe#3\3 ؠ!WQ:$},2`|Re]K`s().pIUŒcN-L %C.lM 2MZG BvdVU!D[*ULLm* =e0r>A&OX#^&DtR,8T%ưllP#21c0]fr>[QG(&yVoQZb 8Mo};1~l'X6[OZ&Gx8wUB;k=猏Vv!ubl .9HxT910W}zwW6lUP! .A)`0d9T L,чυҾ(t@SP@Ex*EsIy1Q F'Ug WsxfCˆ}.h[ns"V8gBG򃣣9v3æ;9y C~H@$ ]E%U#FMg"D4QšBÎAІ\$G|PK*QT}M9Ck8l=39CL{.'Zo|c.+kRQ?aXGAظC:j:)p5FhDC6c5.3/XRu>7 #l#bh%ep6FT1끙x0rng5ZVܸcSփ_wAD ZTŋfݯ>`msv}U9m !A F /BQ+Mn B@;$/H\51⪑F}ASI\}ʈU WB0SERUŕPiaBW_BFŕJ\5r^j:=vqըjW_n3xvpx>Es:mqޚmvF;z:3!e)w?0ԅ޶5ZPrԱYkl'|3[;89w$=_; FCNSᄑӐzˉSo.@[f^o{7z>JhݳF엟_3;=>;?)ʦ S~:Eoj.].f_?3@jT%S\!bɲN,\Tک# `F8d[ -&c+\ !W&ocb[sb8 P\Q+oyYsY"fNx)V7H*Qvp\iML;_ XdɦO=ٴ5LzfS+ݖ K%4gl bVq3h9qT#`.IeOTIExP.%10aTi&h'EpҢ_l6B8diXFOjyDrrS5 ˃Pa,b2grmc;A[{ 7Y[h^R۔>r)?jf&T~ו9hivFQ$@@;'w(wbQ;3G;0tVAhPT%%35zepM..7 ]z޶TgwR7ȅ&?vߕg'"D]۬=;:Tک&.Z NQW.Mcz}iEO(o/`T65-޲9m'?zM`vO'(v0[.sd6?z?7̿E-++~mup1q/شՍ^\yptpvŝO>>"fuծQqpR'{?僓}َo?w?|~?o_}[}^܁3Yw$ape鿼KćXZ[-mﲴ7򒷬{} \ޘ=k31of?՜;Xw[P[n&,z{![h%STSVъ|Ľ! k90q>؟[+>Ym.Y7GgH4x6 V9 iJ5"%SH!(h#VDNϓΆzc7lkFm¨ 6sY)9Vbb씇0t,hlOG=7etP˰j}JQ+FGy<%kju^z{bKR{O]k岘}<8۟mfܺlSo)ywi/~k8`zFA#Zj0rL`sC&P2:ι_6pC|*f|v&5Qtf Mw?Uen~q4w W;lm/B.|:^k{': :5?\N˞Bf N.0zKo rK#mcy ?AgzhFW08vGbI2A\Gv w*] r}ܜm3m3loi]YJ %3:B"k|H 'Q+IP;v K8Z:"Cʬ f$jV#vʓ~r|x:8I}ZnϹV 4*SBfV8(r>\}5s'N\>5%EyT.:U}tqd3{dhd;AGJAF%Ix2Sʊj-t"[Ra!DcAղ1;@)7_M1]8dbvKJlr \$@qI[hb+. 
KtՉFc#_j;yA[ΠL@Hq;4q7Q`]e;Tr"f,y܀/uQ;n&*6ZÙUDNADK ~bɛruZC )S 'U&pSXI*b1uXW^qP*lM$1a4Dr1RA­peO!CΉT(9Ur3`|.LPyGX/l[J;2Uk 1,[%F+[E)$Ԉ"?L2r| V9:pIFf}(59kVc:NNL|rI3| AV8.l~ 1L9G죕dbN?g|׵ cT(vɡERb н\YA޹_5C2)dXEmHlDZA%uÖ%2`b  (?Z^;  }6)/"&Jȹ|*Fds zC89WO<@yԭ;}h[nar3#/hM#3ͦK/fVP}MRi8͆s'rm\"."%U@s\(0e2JpAІ\$G|PK*QT}M9Ck8ltZ[$ڣXDPj1s>Ԯ*K$S1 [렁Mȹc4܉YKAͻ[M]VV~i[TOp(K9&cQ6lE%P(813N \AhDC6j\fF_*b -^7gFll#I,F׈*f=0FLF]*ݗwl`JԻ[x[ןD>Zŋfݯ>`.,;-\m{]z(pǿvkDovXѱs W%X)qh]U:o㝭*sA/r Bk@vu:x〘km=5[wا3@t MJB!U[jbDY8ڦ6.i6heR@&y X^MbmtW- n%b/%\o'y'W솏|?V7pS `䘬]@](;k$RT'+qoƏ;f:uE[Sl S`RRhtAO;f%6fؘMZZ[\m֐URV6MISƈs;F ]#BK>Zl.L]Kyj`[lsy|L &oG?]sx4;dK$LF^_~G@"Rry(O,p )X k#d?]Y8x #1Gq/ xV}Y8.Nc"ECABl+(F@9p$Q"~kIq> TYf\rcPSs28K}i~l eTb,'{XXe%%b,/M5UZ z7/w7ĭ侺[elKMYϖS{ͽ]x-;v8'K8!D%JE6K %(B,vFB'y9ˤ2&{*$7H8lY1(T J#c1qvH>Rb3c_,4PXxP,(m~f{{ٗo}EOl#vd`H U1C lRfe D&I2h$[$g#9YMGP3:nG $& P#yڔ(%vvQXQ1O͎SAmӣv`7l s*ϱ;dP}›$R6Tj#7  Y6iJ[5ddTyԈ5!"3^!qD סG:jk a1qa/m20 """FDGM3" *< ,"uVp2$M@r{bT u(Kg\`dTThGM ͭ£ Z:l1qv08qqV:͒=qQ'+H=.nl h$KTYL RK)bM 7[/Fg!ptk!+{^4®=\l/=ri`';l0hw0d() 9ZӇ ;Z "YGjj! .Qz}C;x; ŸNH7{בAzF"!2G)5Q+$ Pq4R੉u`aDJ*v$Si]J¹/p|k<_Nm= BkfN s8 {G6ggIbzz$*V x|]sej-jKlmVd[iOD4%8Np]y9%:%4$Rs]-,lHudfSLoG=8]ZU]=YVyӎͶgZbZACQ`$FjѕĈ(-g#f)#ĈH Yyg"i-#yQ",Vx15BjgΎ:kOI)+{RJxyK)OaTXݪfl&oZN #8zN(bOܵMz3s2n.a_t΋`HH{M8&T0vࠠI'%a9͐3 '5k\rbs9ǜ?՘N(AA0yh ij(MS3$El >g; m7}Hף33vRu ^u9W!76f,]=m:^קSt,]ct(zlKe6SmͦB,Ycɥn{5^y{NC+%7χx&7me/l+RqrV:p*3΂ Ry魱PJg&EKfLz-.I鰟پY:{ZA<5M:B\W4K h5mVd3FLDDĢc-<HJw0H}rzzrpxhC]zVd6A g URᐐWB-FlQB_){zs)3r} n=u[`a,Y|49Ӂ_\#}?X+?x>==\[Xbz!L5=29&k )B$I͘bd$ u. "Fm끧 CZ[C#<8Nq6$~PCr1b#4|!:'Ro6[7DajQ&dhlrpXـ&x8_V]:[ +jC~l,j-ĶT\ $hJ>佊Z"6K[AF_l5X~ w5`|aä]yy\6`vKv?N][|][l]u0ClzXp?_p]/y~W`p/ǥYkYk?mx;#lL?Ogv[0˨7;i>.иn_? F P>=쾥&+M6vYe֙gYWnF%eC^w7*gz·YR*Zo ~bkvvx< JKJ't|Ws^\2 7فe:Xczjƻv‘zj X62`)BG+2a֘l1MڸY/;FXVS(f0 Q?tΉLg-3P ʈ) hBRȨ9*0'XbJS}vZVE|{VE 9?d>yomoMg}d Z nFͽJXL >:%쏦ˀ^>׳8<֪=/67noa4v?@0/?8X x$jQ"Tq뤓uxI9Y4׆[.rj(3Iy;ҊSs^j:?Y<`BYS0)CŤqAZ\ RpoM(1FOA?Ai]B o" s$FfXv'ak$l r [ [_`Vc!Bd,&],-S+RIUXUWUVéUrT{:\"F*"Wq#IĮ'I}\zjR!B3ЕRpc]+j08uB))=\@b@9+X,.J9uB)=\@ 42+ \'O,^ \ 0U+Hgή*KEW/Sn|0I3̎v^+XQTG#4. h<WCy9 9Lcvypp}tR %O'vx]v@ϵ(C9&pS52w`2wrzF^X?N^UOUk~8W CВz"'FSfe&|ښ5zdsF ԜsW Rm (7wB|iKj׺q9UoVEoX/YgOWIO7U6^Pm]Xg㘯a9vwuwl)t57jjkc_u|E}rmn~>=Lb />^z5OS ȍ ([7PK4mW2莫? >?GcvGVj3R&WL-mSyA\{mv9D~%\+0\ejuOq*~]%Jc]2(VZqW6T~%* IN 9Qgr2'LlDwU ~wɕ LASe՟WN$/:Jp.u}Wǩgq*V Wz-+2*2+S ĕW q%\?-8y\A-mW UР•  LUpejnWrk2z\JdYW&8epu\ʭ-Ipe7g˧[NxM-,NNUm'3bN*9PtA(G4[Oa!8ܨ S?Af*^]ɐD 8:K8&*2gSÎ ĕ&{^W&7UpejeK82K8+(l`䒣UpejiŠ~*aM.z*\\]fԦͻ+S' \}|29w[}/qr%Wǩ=Gǩ;;WYWd_WVx2[;3IpsTZ W&qrS\WPqe*Iv\] Bx!\perubf:2;qu{1mfZu;662_sN|7O~7x8j8o7B:nd3#AǤ|XF1g_8>~ڜ^@%kQ3v;)yPBLo{xB6!/iiLZR}1D+|C*v2^Vu\ʭ·IpYh!\!ͼ 6%u\A IpI2t YW&WqW6o2(sJ8QNޟWGw%pϴvu&p%GJv\=v{ n\@2oW2 ĕѹ?'r j[T;.W#B2gN8YVU0+SOz\Opee\\]]s=~8ywW+,{Qn]ߋ,DŽo&,=s" dOvDzˀH )L?Dza*2.eHe!\ɕ LmaJV4uΐ\G6l~ T}q>Jpup9,S ngLWpe:2y3dPhŠ;.W;}_~QL|Oo:#Jg|_^_[ASKi7ޔ>/w} xhڷwu{euw˽@W?u: !7}ƾ׭/zڈ槻7%^w7Ǐ7?qoջ"XT Y#iwώ{y+ʛ^|r >'wÔw}pwg?`<+?#?og?YA/#c^$|6{?[m7ϙϙ)0>b!^BC3G?SNz=[((U?+ȶ⣼yiVl4I D꾒\TP8%F%0~'{ƿ$q@|;o/!?; ×+4G\B͹LSzfUfu5|HwL=G}~U2\U ]HJwm>[~؉.=_`(l>ܧ;qA3HuL.c}U,oŗ*iȣ70:T9ju2"'j6W~P,c LN$S N*q͜1'Z^x@,}IBɌa !1gl76?gptLܸl'U쵯&3gIh2uE@\sT;)ڎbǩAQ.&ON 0/HPIƬ9ΖVA%6.iᑒY~/Fbv W7ץ6%Mo1 )`+(VMFWM4T㥡jE}LchrȠL1! 
[[QA]/Vĥ.ۊb,@T<5vNDv#˱ v&3Uʖ5ʛv~X ޹7z|a $f ja&ap1å6#J2WkEL%h[UaŰ' 'aQQ|IŢ Fy0z-Dm(2:#YAy e|X,+%jQt_5Q !݆f`q'28KNOda\?wyK~d*]tہjƨFcCtDa>vy߾üKȓm5FifgK6"ywpx uԆHTN[]%/5M 0fk`)`1Eg"#l=iST< Z1M3Jw[wܫUyP,ef俓 8/i!,œ}h czXbȃWt!{y5 XnNӾ{q\ʤDѱ F=*xº}(BYG/.ay0.#ojDŨq[KvL~R, Hyq]F7fY|l &% xKderKP//(72f8(5tR"˵2"{hTp=gTQ2}Y:z<Ny ؈Yϊh4AvB+G>'1a,;E.&mau0èjg(ɍ &F+|K䇝3J=dnHp"T[ wmY~۴T`v`b8ԫ#4ly6dQ%Jd7FdԹ]u Z`5&'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN rx@ dAdz@׸j@}a;pY&'1:@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN#r}^[{n5RwņoO@S)y:wK&'P5so\rKߍq>a.Lv=3]m Bk]mrm=[Е%zjs縯8>֚C+Ntut%y Z+ =dՕMDWCWңa"BZTCWרZ Zwt(=#:B)dBuh:tBJ]!]Qmg`.Yi81&#|5_pB\/B8]9* MFM^oU%XBσh<UqYI2kd/XvxhջڦWdi~M?z,V;Omx s<3!U53m&i[SxP}oNSi4^|>GHMMUmַRFor+BP>eԲ/۲䪑}V(rVMrN]<k[Iu,Z=z ڠimVSk^K&T龖p2O?&JOtut%$++y= S ]!ڗ^x>]!J͈Lפ0W ]!\j+D/lRy#+p1"B^TCWjAD o>.Lt*tf]y np4`[2=Z֓ealGKk=`mMnzߍڢCBĿfe;/8\vw M-ʭ4{m:ύ2fw/!5xb3Uu<tUB|#JK,95+tVf&ݗ'f;!zr7TDWXzA!\k+Dk~ #]y%Q*CtЕ`kzNR=ߖ\ksKr?(a+OWDWOiz"BpalOh":FFJ++T5tpb Q=^1Bي KV]!\k+D^]!JÉq +ɫ+ NW8#+8*{Ro &'Iߓڦr?I p'NXcreru;]/8Xyp~ϱ"&aΖ>|; pwG.< s?`yX>T 44O.G2h{7{Oˁ< ގ 0|-yS.M'po2+p.{W; (oWɭkiAWkV8}zݡםiu+$sLy7Z:r=]3?kYY@O"\!%^1LK875MςЦ_x A:yb*f/Xφ,sRu&{E<_YDz,4|}|@T\v0La8;<lwgem2Oap81z3H3L'wj^F0\.@|[쮰 ${np1;Yl?ៗR?<8<"@N%Z/tQZMHfAD.q#8ybUE:)[@[kK UXG:;"sՎ/)渂"=Eh;,ǗDk7"62ܞ mFc)J>5m,a(<8\ia:XL8/m4x:}lM( +Ќ҂::ܷ+3AXJ *rrD7E[\7DMJ$Dr| q:"|2A/?AfcΤl_>zq8;iK:1 Jl,,q ^r|qQd+eneF0'FxFƇD X3 X<†dK‚06'R LhƱhwJ ܪegk+'Rp0:\@c#  axH6H 09pM"ؤSP{`vwqxxfDfq{ 5P%\]&A~=/L&|DPUTW9WtdE]j xi?&pYa^LOK&JџI4oÿ˞_%0{ 9.;i#qT} d\sD2hut8xuw%{e(gB-Ή)^Ls Z\H{P;‡_|Srk8}.5Zn:Mf?M)}g%C잁UGMd ^/Nlx'; Y^kWrF l< Fu|:q7-onݮc>n<䀹dbOO[՚pҖ˪'\_}Q*eշI}5 L>vetW'ӽꮷU*};دAD:Cmy2}7*3ow !;Ϫ +O *Tc}v;N*DEΥ!e@%xQsn=9[+$0$f'0rk%y>r0pPgTѡ@E9en#om9rKɾ0t,׮zud, g pM;f㓧벏}̢Uӻ{r>' 5lj8Gn2IM)9Kd`@+z |/F㸈~HG%3/# q-%ԏ?I1VcWYr@TzcɻP{e36ݽRxvn(q48G8$So҇B>*{BUqh-X΢)Wߺ4R]xM#_QkZ=X^BkQ8c}/ ۋf \̩{?Jkd:Y pNN^v˛;NgXLƙ@=i4yY'&N|."NV7z~5tt@5*YW4kӵ*!KP=xL|5= e:9̓"LQ&T j?rxFD߿Ϗ {/|NtIfxд^o[>b/Ome[S|mm^o1/6/a,Ʈ筝Q5IY?FPX>r5lI$uQ9^ qw*%( !V i6ꍗLFh }7?7/G;~>R$KVR00L V 6Bmk5G{sV"] u)0` LǑWpl4ܑ褃^ P\ wYVD ~O=N!;FDv]dxyN89ۜsKfU);+$0Ӫ,-%V'>i!Ge2U+ +K &n% 9ef @ Sb.0" Q2]VQ9W`rc"#`.hd#."CU6IitX2:vexkvM\=0Pc}t-%Fbs\s|cΛIݑF³ƴoR&J{yke FV:Pf8*Ir-5FZb0Hhc\pH#uֳNR)\HMhkJk,l2B2 uGeVmy-W7iw|.>.z_LfNƋN mR"S##'T%k3PE'aNFIbd-Z%67< ( \d{8Clr@CA8lb;blf&aU<mZ&vkr&Mc# [P=3=fO!TM`$!Hei6qCHvwmRiT@ !E4&& Yb&(pKa:iZakraoy!BD-D9,DUX"rܛ`-Ƭ˘k4[*hw(- 6NRpjҔ2rnJ&E5O&i*2[sm1T..uXgkV@.bk0tIDj%T9M$ fR`́o|pAxj͜AlDcbw n:C2]$i->.# ,5xAn2 r5xIp~p&mWkrW-LLD}e2HW3W)Jߒo|MmRy ֘$e.iH\+i* K 9)gkjeO.`|!}1_f-6}S?˵`?..#lSym})[#M$EK<<%f`)oW}m񮽷/d#unWLsh~6;jܽ5/́jҭ7q8+]N*c-D1yDWYdt%%8M\ʵrҖUC[VN,r͂DI1*P_zYi<9m's1q hwt.!\Kd)=a*zj yMgZis>bkr5Bw[|b q(3',*_=/51#?:]o kV(R?o*)?>!dJ+̅.W(Iw WVw~vTii#ĕ5"p-3 R P]lp\{V=>8vq\{pvR= vS: pE{\=Qp,W(רRpja]+*W(ؔ]\kKvWWG+n! nʕ\Z`Uq] \`UN0rM1WG+U0xO9}pgmg7C^N j51p ln]tXb9s>wIJ%"j9L'!.>nm֔ |]3g?/.E"_ϔ[jEU7^ mԒÅZ*ÜL 'mLj9BӲ-ZXF {}VW[3^ m38Ϊʺ+\}r LB͒G6QYܞr_E l)>y $14E%Q9W[eV/0HJ5d U*bW_ ŸVxc3_.A׵nOK{*(>Va>}w{f^ň\w19eIUՃanK&DJ_>gG#} ;\R1LKe1W~p!`{\Ј=SZjS扐}Ly416Fa &iSCgWw4Ԁ: .+U/HuoP/9W?Lǿ\;:5⼓ՍW}+|}>5ysRG(IrӳW Z}t0rIF'ÿpޏRch} "r[s9K XQm-- Vh}Z=pgخi.K4zDɤo]% bpo |8G?8[< ^hi7if5a]om7[[mmԆW ~S i"t&ؖ㬁\Cyj{ct Qڂp(W(WRpjMWJO^=F\Yb,-ɻB\++ C rpYlD(e+wl=l'j7TWd[b;zhCkf  V j^ P1] _3rRPl@*ǻBvWWG+N \`t1B\+T+: Wǃ+DAB+}e Tq*YQPyJ)#< L\[Uoơ`JˉQn㻗j꺃*#tpr [ P4 㨲kz\i-a sQ P *u\J-{\! Ă ؐrpri1SP-]%,(\` Q,ƻBm U~gነj7Opq9ZuTv-;zhSn7 ?̶pr,Wڮ TRbz\!T+̈.W(W+T;]J{WLj+. \`Ëʵ\ZJYq*quB)S~-e\WRWG+iC@wX/~2ufX#6Wou褝3Wg)D:~N-0w0^*HL-'OIN9T~2b3l)+&6CZAJc3%` \`UP.E/WZJJWyA#(RpjeJz\!Զy(ؔ3 b&i]dcĕUD*]@LB+TUq*e? 
gՋ'\88v?M94Uv(3DV=URYP0 *S P]T:B\1e, W ZQ KvSklq*YcSWǃ+~. V(+P+Hq*quZX[ؕВ|>A{ ժΏ]Jz\!l]?Ls-J1ذ_RQf;/1isIG = S(rbe`Y1^-g u/TZEe(2+l/W(Rpj:P%WPk V+AZvWҨWG+cW \`Ayry1Q&=WrdIONjn@!\Y|pxr1vryOSCU&T"TYyŢ x) >j(}:zcczuaw߾Ys9]}xFCc F5wRc#uо/_b{&A8 fBXqgzؚպCJ ^nJJ[:w,mm2_CڪnZ< ms9;hˋZuctcOVt lo߾.EAH/1N \&.%@ ; hIĤՔ)9Zu1X˪|Xkq/7wX_ZXj}l]]\=t ~Յ+4M.㳛vpMC] j6Jbsy||Kx7-r-Ew 6wD?,wLH$% CQR"hq9$z69Iw9 [ U\. ыW!sRIVPdL@d&![c~e3@T?IY_܌]}Fl~…tuZޮ~gOfMf@i|'h VsA=:1`.?wu4-  #$_M:I\7$`YwƋ1 LQ@gF 儍.VSFnH~qXGibͿFXUS#I+eGGK9splgB?-%)9!UdRE4Ꙫ﫮9X,|NtIr Ep4Ǟ'L??es;']є7zע N Epzw\Є;҈yunh9'4} F}smLͅҟ]݉og'oց3NS`'g+ճ] 4GޜqғebkO6tyݬ+;℣q Xfhx|ӫ>mWث`^U W/o|K)+R']89fo(FyҨ&Ǽh~R?w߽wo/o19~qDO`F> 6 =R͉6~8lfMbă%d槫$h7RvRԹ+ޗʏl RmkLǣ鶓[ǦoGĽ8Rw_/GJZqi֙9`Xl I[k5w&K.} -\ \WrϯZ-,&v^x5|2"U`T^Im2FA P\8 H5*k:S=R}lM{8C5g`;G8׶y 0ҽ&g1+;s<#`}Ggn]t_=@rSnMᠧV:j~p֌#9dByG"#,$c `#"}`=ki*?lds_0ǏYF)ykΦp?ȗL_sG: hd×DR/fM )3=>Rf[ps$="mGI]l~t>.yxJÍ >θ!֜Dv؜k ؞ҥ>f:5Uow&'==v=t,`ͣ&x,F+e:G\d9sY& q#& s dr epi6U 0u^>`SV- ]WO4'$ A\/Gޜ귽hi7jy ?rw f׈жa+4hmSy){:O> 9-qAI"f9^;(VrW*窲ݘ;H CgzN}lkmidBÜ70sJrh1BB zW=-Qh)؍=Ϙ\MA-q %.w[xSO1xM7eA)bԇY+:omK(FoV\:QCJ|HG=zd+fqib觢))&X!$.*Vң7.xa"Z7CNLGAH1{ -f6-HH?9!H AM5q57q:U/7­o|e%pq*ܛ#>iKvQ>^p:m'iwVɱGGτr~Ũ]cu{6#l d-\RHHɺ6 $dhg*%,т%ZD {D hSr"x%+'i;5 y<&bdU&G߼3:#A mie>YmˠC?OΠ{ TL[Gf)9qPd.K"pGm6h4C-@n KS%N$䖊>B!Id#DW)J=pEe#&FB]ۯPc.ֱ)V;^զŖz:,M]L_~ ^,xN]W>YdK9xPbV@h-6H[͙0:z7( i#*.<HURPK̨C.Y uUpxB~E̵W mW~id7S_1NbѸf爜!FhʇHQX/):87*ƻ@oŸh ~ɗ#`_3{qB6\& )Q)NoXLAuL ֡5RgܳY}fWߤ޻:ifꤙX}*@PD0'Ôt K)8HMFn e{ꮾIE$SJtJ12p!k$==Osm ߺ,M+R^IHm] 4#VEĠLdG_'ꄽ+{T^N6RlPLH#I2i"ȒI*4UYXf12$bq"\}IT&ڻc׭.{2G?/^?NO/N_~|ne$Fd6KGch ͶܕEY* ϴ$ c3N0dET?wo i(+!ZP)l8w*~t9~j ]˦jzM먤Ʀslry6ls%ŵx}ۙo1wYM.RR|~dV;Ԅ r3O!H< c.,PA% V@R)yJX&T"yVylBT0hI<2 &͵+TmXMW*|a5x/}}•Del󂌷Ov+wҙ^Nb?p0=6 -e`2*I5:'-@'Lxd(F%N\+  I0MM$ L&nǬ9x^%;Gib׮m{{-At<ƕc&ϖ蛁hb1*EiYrn1XŁ2!CĢDC&!1HFM|!eHa5q_Ǹ/~<#iMȪD`cF guAxɒzuw:L n $wY. n\L(%JF#֮XMO^u 8mƵOjZ@ 4bo/Ij ;%2g 9'sd b>BN,q#z~q VҎCQ>p \ۚ^Ngx͑wF_2 NuQtَ;$ُϔ88~,8.xUR ͫ1'(žӌѽrëWqK^wŀVB>??T '8q}Vdz.kqR+)8#$Ld&`$_Y}]f߫y~?/_ Er}& ]\z=p!˟_-$[f7,ո燺_w 7~8Sh,_l3t|2Νuмfg|ˈ9ko~b5-4|'od)%%.(VVVdfdQq0X>m$4^r# $E \Kr1}J9R`8m,SkiNprbrxv9<".i!~K{It,oZ&4B{MJ'$ZH`Dvb=cPؔBH]5Xn} [ߒ+rTn).,cV0pԇ<MJߣd|:OG047Og\ y.,phH# k)C%:wb8 D;o»a]ح7&<5B P*J ޔ8 燐< |]i C4@% PfMN,%7d*J"԰ k˞&ʿ{|?ڳ-_dzc6` ?n3wݣ' $?[v^0I a'Lu$ksש׷ V(sh]?Ϋ֛,&t:oX0ͳYv-sh.nU;ݚyf~knR2,mϺ[:΍bcq6dʸ57M7wPruU˗7ef?$}?<`6eI rEMkx*(pq(FH۩8]%2S}kk *o:h9JQ'uQ(vak62Xgmn#Nmͮoo' ٞȖRB{,^3{.mt9La)Cdxbf㴥<]DYs!k.-qzVFRO)֔#$Y}%Lق7Ƞj/{s}yPσ¢'TzbYfJ)Cf<0Uً>/$2)?C|x8yqo<=J&{]}AptNUk&I>]lL\7I=1]錹ۼnqC%0SV]< 񳫛mW7[s_rw 6U |MqӴ9^ՈIsۿ^VۨW{ֽ*/Cad~37S}?IRW3*O!аooQAl# N>]6zξf j4}补%oTR6!;dٽ~PYRZ2+BMK?ɛސ<'_Ap|UT r. 
iNG.IL^R4zfk>{GjʾB,!Ԭon,+f~㏫Y3kch*5Z\3ŖUt|fS>jPZQ@.zOǣ[mwΎRނ(6.45J׸?hy@+_O씺K@~a Yl'rѧ7kpDc|BwԓVyi?OT^E 'gȅo&oYًavvƕ$ c#?:\~L6F|s ^g!g*j2{;XL9\*#7raI&d*΂I%m=YYN/n"d3;ᚭ|4;{YhH+Vhhڸ_4ro` $Zǝ8h& iVab60Nn1lrڿFUd:~%uz %"5~xyӿ/,R`yYr.PIӑOaʼnB3 2Asԍxdr= gHKlaA9Hve>'y.bcI>'y$o佻$2Aid&sO!|hL*(9%t,8,AbևL`-&_(>0- f*8݄3S$+iWtF7y[ -0w[ qDNyoc4No[^%W7Z,*}m}1<Ңv]/A)C6 ,XPm,<,e@VFjd|!b^.ߵO?ӕ5sqεteј7&WcJ.dìu%ՄPC^%>&.-lcCGQd{3O=)s>{|{V)J#&% QpZs3>$لL1oS1 ')YAjn|8| dV^>/Xxw"Vl1Օ) : 9dumR(M(r.*Amc=Awu`^9 ܓ~Z/p{ ^鮵rXڨ9Wؾ}6J[Dv\2e20+h: /J۵X[rw+-/(0_]PW߫SmK!SazwDP1vXuE28~rPu"ϗ[(lwEm/jՙHEm/jNԮRƉt0Xa76kQJᎁ P8 1wȘ>]A˗~a_| /{[T n_S^_:yLL)c_Jjʏ緀F pTFﻥT߿$+(.{Nsp SD!Txzሯ9hZgm8oS5;ߴg4}5[%H\Y8-ᄮcV!% >OygPzyi Ou5+s~@3R"sԾRK׼|kUP:t|>|yBP6zǵu% Oi3ԻLF*]B h42Z.5A.}0sx,V&ԏά'sJC7'zT}R+ RmQVN;9i̩ /$f/h*ș-<\N}-h >F@Vp 'v \W]z , ;J#]~ŗ._ǁ@8w:gYNOd&q!sA#-#J?ՖZ |2¤,J.UNx2Fp4. p!uc>קG.}ʯO)G{&('%hѱ1g/`X2SV!s1 T:TO"g -|;~k:uIs[bȭC!=!&Y<8tŝɌ("EyJ.-ޠu)Z!"WRtp/V4 KV^6L76))UGwVV19DoϻSp}Eo~yӋ_o/<Q>HWwZEHu"TĢA0Ǹe:|S8~d o/@\;H,:̨1Mo ;$u@9^acO⛋ j]>)ũ-CL|87uutzq"G(cd(RI"R1BѐLAmFENd#s4N]LlŐx<#fLs0ytFi/NJp'WN.~bCv#,UPH1trܱ 0%т׼= WE'O֓&(78$}It^]׀~Z5PiѸZ5NEw{;lu;5MR/$AnYpJ̌?/%I"/Ye/"F<()+.myEP&(ZhDJgWY( :$ʜƠ ^񲾩hA%_b-b ?a0>Q@izXror) CV{R\_Rқˊ́X8z(o{l`-d$.3"9E&0da\Z-9TȂNiU賕2BH&K=&|Q)=v)Go"-=!% ķuMytfğ>`5&k Nq4V)B=kʙ$9*ENFi&sW Vb&e-47"P !E?Uۀ.{ld-ZE1i[w@m)kmEV".:ǧOeJ0Fk2{gP_&) %̅)2 ҇xAvD͸ ̫ E(LbQ;%ޔUG@ǭ쫉sڛ )a$ޛ myA#1*kP7S`ީ"-EKtԹ!z/$^HX-(~z.PTd !G^,6AEڿ{l`-ڤ] yEK%vUa߬4L2P ZbhGq&dZrB(P! RI3n$` zع+AF@Tt̽E;4z 5*.+OP#i.xH!0%iwVj>jR Ԣ]bC!AL&Dun 0HCBhUՙȉ 5nP7o#̼}|CMeIyYz>|H#X}&:) ԿjHs<+˦/+Oٿ$JwS@$x-h$iUf"_s6]%'z&_hPnl֟5ϥ70 ϊXpMY:ZS:up9$bcXXQVn5-mJ%HShY! G;9>)#nU"#Q,Ygٻ8$WgDX`g^y@CC"[؋Ѥfw5H&,dgU_DFF1KQvVE0h WE۹(*.IIeܫY H Y3,u%z0KN%2ZaGeo# R#e$ vPoעsYf =(VM8Xpۨ_h80hB)'t9mHE:mu2l Z(jlQZ딟|xs@Bm>^r*RJÒLnoKC͜ R6fd9VȾd\R9)V0b*MP?JDb~vaϔ Il&wll7f65;m3AcRSHzZn1ؑ4@)xi~zki]̩"avT] J|Snjǘ (ӛ͞xm{.>!v-#g9;VIwmV,8١m{t= ѱLjpP{=< ߄۹a7o`[yL˙!#ݖat& c3W:`tLzݰ ^aPY41UŴ8T TotJ$TVv)VeDЌ3G@f{{o@mT]ç=Uu hL!yNYa&vlR %Ǥ8ytB'~|z}sxöG~Y3YfJ,p(b/ycєK-#{g Q9!JqEz)Fg3%oÃ@q9O%֩%W* <-Z-ȯpo":2l A`0&=@Ǿ6y'N$7lit*-Hz{5BOd?|tiLƺٙ8;ә~c9~>u!Čs-È8* ŻoO6JCd\4Ǜh{ѬY }?LkmhIBTNn 6$0Li]k1R*eYۜ^5HN;=3O#4Sυ\[I;\eJ n\[UrfҔҚ!rr *K!r*-i]4\\:{{Ȏ2!A1jy/yx]ӻ=RgG3fK`ɥ8-_N_6v+ *6ǣ9%FFa(o+Ey)Frn؉;#ZcQX͹6 Ff]h#[&z}i0- ,ݪنS1R˲vc R5.Ynv\E^wW ^e8hVtqkRCr!kD&CJtj4'2~opa~],mN>#L;PRR.3x;&0#Q=, RLg8>;?}/KJbT~%3ZeS@LUK%H)@tKe͏ :sL輙v:,3͊95D`DOvAsx.b'.[)z9 d6cڭC?d^ý,۵̾o2iRM sm} =9 rCrqȼ1_ƞrU~ 6z?j^˛x?@EX;EnXF?)뫫Ei'UvdbgW&\ʼ춪G:K8)!甚!](}:#(;Ygs9OWr&b =}@mVe!;q~...9&63%}1\Z,DoW.ʭM ֕2MLE~yu<%^7wv}YFJ~|A+6lmc̆b ֨ZLJ5[KcQH(Ytx̊r+;fWK}\WԒ([ș9 5 S4-TM]Z=yN='Ԕl^n"~觗pE޳˓=HTqRܯ W+USfȷ8{~NJM$#g gJE]"}Eʧ?_?|b+(+;1>"oQ?uG(531 #WeL# W׼XRRR|>3'N'ڣO&HzߓJj<*:像 a/]wpb"϶zJNlBHR&R-%ۂEcg2Z}X$=iޛFz:~(>Q2[BdUD,FS-QP*&SQnԶFySxSxp_k-׽~~:{8*VOOSyMm%h͘ ;[@svYvx%|m2lɦIm 7D4 1kRkͷ&1j%dgy@,&{+sW]z/Hz1E/enI3므 V~I t74Q.>,~ou?Ml{ R1 "W)nOXq᯿Z[TB3NN],.`*?٫r ӹK]u_=AXLK$nS_"veaqf)mlpƷKTgPշ68i3Զ"VjDEuY>:b+fkL屇m)ɆG _ҞaV( 0rw(lxrŘi[\v\l6m1UKMB0pM"Ù[j":Ȱ{kL1& _vn=QJoDWp7W]K\M {9k k)S+[G`,WW}ศ!t'8uoyvNú":YdTԩI"ID(sS~g ]xdz^ׁh@@z`/*g/"Bo:A`l8ޑC!HL~eVPуfT 3^^x5^eל"؟1*bZv {ёeTv;2էJ { U9 䃕{ȣiŅ/ 6:2*qa8] L V_XA5_Lȡ@Pz(\J|A0Pr1xsezS{QǬ {Q0w1N){V{Ζ -ԡ6~W.UGdF2R5y*O9yإyTxR62d[T %UyRw*ּᏫzY#KbdiT,#KzULr _'?(7WM|^=d3r@780RޓeLyAWs\M? &CYUWmtl?ߌ_xӋGb+-;(~\^cPT-aQiv8<4v1,RTZjѫu~pk;"r WjѦaitKO+ b$=4߂jЩIp㖯@_  W ?:;FQaW0@]}蠂}o1U'.Ιuj;a#~jGrҭ]Mm%a4zHPyygT 8(vh[q 8o +db1X68?wtw!oH"ʽ8ǝD乣5&MUil琭W ( Lm%0*ܑkhfr-Z2sFdNh7+ $L2H }.aQ*%˴zWN-=o5 5/O/`MVbB9o:+v>C2*PԊOTi*?-RT83E D$yL<*Nxinb {"qm16Zp2Z|_V߰?o&s]{p;I_7դB7/'攚dܫjSVZ6^ɖ2tu]onBc-J"]ԔGuQkt. 
j 'U)p.@Re{d˿KX]POVX@nǻu4Ew߿z/HRquydLi:A<[,}eO֞=-=qS"9:#ƨŤKte?fq\ƱIeI{Mjt7]m)ttχg *h(GC{Er/6BQ[V/S&SS#uwEwUGLh—5YKb%u$:fa&Ht DYk"EkwE(q1ZJN:"bZm)CGQfQuCb #HF` 2,R2 h_)Y%2|V?1">/w_.gn){ ?wo u"JrQ`XIz(|tβ*J ߁'ՙJ=E_*5Jn/e䓶4=9Un|a0}y:dNyBX{ M{D]692/Ћq=9 n uШ3 [YH0 =Ya{#@>8ZLp!1q=V{Շbw; ;:g"_z6~rES C8ZT}setRjez@ nZp ڭ7>tEm" EZ\*YKr$֋DgJEҞݨر ,tQ#wnh椽n.^ 2v_l.~קX0[<_ &fD}Mޗ}|a1kbzR/۟ 隺K\QU1vYrϧQ[]'fZ!OP.<[kQW=-BO{EK6Pd4CҢCDCf%*WRUV(-Z.ԬEKŏĘP/)7ĭORݴ8I :j6L aZ l͆4I)!iS09ɌaΘ<ɬSft~ߛ.}oV^yeZ2H2sr~.v: IN$ B;Ez7{t_fR#:o 7 ֦NXY-pjo3LX䉎ag#cl%kN4QUtB},jw#SG*X0B2>xLJfd{&s[}B.>VrSkcJ3.KNw+Oյ0R ٨=ܠ=Cϫ9MH32snL4\ FRlOS4J-ݗyuF-j"gN4NjЫܜ6$ﰿS<ϟ˨H 8NYaqgQbc p9"DoO}0@d5[^uS+.a6P@l~i թrsW N1P͉v="to9 5cd@C Mƨ&:T1\Qc<XF\H?h1Rqmw!ލ uFB1ڲ֬&cv atO1:<H Bd/- m<:;֌h? r 1X՗A@c!fYC, ,3vYOwSlJBkM<6Yd''CE/$ 熼Nj42֐S6IggA&^;4QЊ u"&:Tw97~o.,></ $omu< ' ଄,9ZֵQ`y`ЖuK]V;ɋhOZB~уk%*a;=lRJ*#D(Z"c<(a*p{ 3c `S@ˋ5A*#:֠ $֣y U4+|d `yᣥXRwTz2;vRACʰ?eOwi7ѺZ>K_|I}@ۇ/O*$l?shTZΎ cI[i^Ƹl3}'{i#"ΆEw1D1e妳p~5xQUBa\tZ htE45X =ڵxbFaFzBwdrKNӥJh@2xAStN~ Ot`q}[=o?}÷aq/FHEMi yti1 l[$m摁OE%4$.+{щF(_/$p]NP%bdQN(%d8e[XlY*egaBt9~}LONS/_<-sԡD7@E^T1l6'ct(ML`'OS&mxDGU=x /@ Z9am> 'ډ C|)Xɩ#rz:rHZo`2j +Z㹀ssk&#vЌ:Ғ@ajcв"8qd7 a_~BN0Q^58%4ƃZ-H7G4 aoʮ@瞠k :]\R7r\@`z:c'i=Os<6 ճL Z (=B"dW5g(GÍ~#&A=i3e@|ٹ43c<Ϭ\ 9`T/=zp"ڈT9Z n8/ag8\" ')ZW_kQ{?8Ws_Η]^bwO$?>ndb"Cz1sN۷Q-'>hX)&OlؘME1 vzxL,徆e9e2q9I¤ 0`S3Mʳɯe`v|7$n'IӅn&,L:d.rd)+-ywɴR=cZŘi1eZiǴGv\%BIk-cxC %rLI,UںӔ'{4]hy}F@0LjQ%BKDe-3{9rDncV:b>|kӶ7xM4QZ̏%19Ĝ=ޮ Ƿ{AV_|JWݿqo&?L~Ҭе6[˿_4P~w{R oBW~?\}{e>ڻyřbvCnoqWW PӄΎϦ"yN}ֳ/h+xۙΓ7l?|[?8vDwwIMV2RW?q#Z ]ч0cJt[TȞ\"Nإh=3v'!!q3μn }:8_'?߀} ^:p,x~)/dT<w==w]mկ^O%zzpҹ?C?ggG>}׸_=qt_iPf+Mpq"SEOcMՒTT?7J:<] P{`]QL{z@T+SAGaN\BrtK&z(DŽ_#18:SG:b#F2TĘ2$fбp\a3?GMHbcEsz0~}Pf_'gvs0ywL#`f|Vl; 7:N#򿍪Nʖ0@Vt>jǹ$"'$FC6;9ɋ%nf~>ja= ZJQȞ~1h€=+__3Zj]gJ^L@3p<-T%F)q>x w4KVciRxҵW1lcc< qkY ̈́Τlf E9]Ý|GO=S `-9fO޺{"c3=))BwiZ97LY3y"{Jbw^Dc}e?"VFՃpYNLͣGqQ^W aoGVR't-K,v$ ^{ŞxJز%zVmTt!MSazZc0J}6%6qa(k :A:kdV-`)wo* oz+0&*M.Ij!Tޚ}3$zy$5?MSqT#`%-z⺚q= zú9ZӶٺɭ*΀"\C>/#@Pϭ`E+XV"/B۸YV"\@~PWҳE"9یyM"~v7OXr bs7l_%κ.dG &]_yQ.aNZlgΪ} }ECw]4]wVOΔ1!ap&ty 1*"YnFkZ7d~ FWJ VgLY>m!@d2FO.{4鳸,^,u&fZ2/XGC,GzE7wQe|>Gug.UK+ã~N‡l^}`ag6g Yya6 x|1*bDY8!ЯXF1aRxԧ12, a"$,14!$L E15 O_}~6]Kp@$K8mېD%ހv$i-R'ĒO#/hho˿.sޟ=f㧗';; ιUCV*Tu,<{ l=3 tKdR "*ykuJe9_:U+*>.V=I-)qOid'WGV9lh @7ÓYC o.075oAb647uw@ T![px A 9M2@Jɛ6=;xfaY"oltQM.̅8m f7M΀gQOƚF̜5L;-c$2N6a,ODXwcA>$1c.>x ؇"$eV;'uC:xg$4ֽ4d8B#an|8VвG6bk pmCω6Y ظ[ٗuZ)'`%PwU`zV)k*0]`ZFq+Xd #@ 2?= ᑹ%fE&̌jC xW ׳:4uJ: Q!E2ͤ6('y.# WHBGII¾Mr@@ewm|b0 Pv1R$"u}οXrr [YV8Kq>aװ6wCF.B_sc]JyV0c̉{-P0ȭ6)3Z͡VsP%102zގ$YQ!=+b( Ʊ^Iã$x<~$aAƽCd5ZM4:FQKEHH"ozDH0F8 ;b )ׁ91eؼU/.^|QJs8Dn,vHJv]/KxH86K?csQ8>:ٳRpf2yTJdu٢A% mRV~S$-O;tc١GSo ɂgm( 7qIR%t.PB1s8̑à 7Z7yq:]?\sw>j8ht@ZzJC/uqP#*@)N lu;jJ/ T𾞬M=h혪C ҇w!1Ky}{t=*i5m"6!LД4kWVd ?Ze$:=yy*O2AGRGKSjcذVǩZLXw첑 xxqsJԩqfj8T?Djd:-3hZl|ItJ?הƋig$1#eu{FnQ/a's}TIvj޺ )QzQd񸾁"\,Jvr.uj߰ek+dw)%%R@cB^)*.]9(kDS:[μl|)ai J\/iwIU:bL\"wͤo]IӻLw`14٨9+$C#p{v^M7YR.QZCk ݯ5.5{u[&(İ-DBOUۿt墾!zn=_EQ%P ;L oxM@1Ԛ"),ZJ} )|1S/¢燾(En9gT9"N9(B*z< {.Q!pDޕ5q$翂ѣ:2 iۡ k7N!_L==DP LuΗw_eC%۩+ &I byW0gՐ,qs22C9w;XoӚ.4"^`J{{\ʔ5T)ة!:.! d냸-9Z|Ն *Q'.ń*bEWeMNjt=c\o. -\Xqv/rq+#n\C1H eШLYޅJY5L:V(ڶJ6pTS}9 In:#Bv6@D)Y&IF5B3!ҙx$x lQR(+ b,e}$? WxMmj'n3_Y19I'~AE3һr4L|Y9i’Sé4Q7"o'pe-ʑK,O15 Zs(vF-#N[tGeBj2>,Hde-BdO*~kN/p<+іDRIBrp08#$`k@~,I*pfrwtPǛTBTQ??^Ld޷MT(]}# # nz/ڄߥSF׫;Hd?9jbj$hA[uJI(R6He?iۺt.o%#o5J#i`itK7ve{/;̧R}~߿;KC{2>t|)ԧS]*>ulUlf.]yi,Qe }e#BȿTWS@ᥔO/uA=ZH/E3q=׵r3-/x?W7&!č ZKf}˖7 ,(tVA&nŎ-inwe0xeV.JL&p thҥ*2峦C]9SCl4X~l"fܿCG۪&imgn! 
cx4f.l$ŸHERJM7Jr^kz{uՌw^c!#'1)]SBƶ]YcЪ##Hds7 :{ö5Jމ(ޅt7e[vg6ZؙN>g(Y@C-EjۉPDc&;bL.D@*f%h.T$CT0,Y/%1sK-/9ըpL_1n=18J yܻނY(!zMjNS\[)XYBVs1} [>aU}95<=,+GvrZ<8R 5B qϵbB[ຏX5zfel릸.JMo8 T.n29@iJ/_d׽wtPXzvNbJ1|cϓ$)|<$SepD ۯT޽\My hWʤU4i7A0@XC0UՔE8nRu^DiBTXuTD6#Sh `&OUaT(ZՖdC褋Nl ('ObSXBTQ!`fVbB"߇Ȳ N^CVY70k=Hk*=*Gt1&"*gX8ZY$k&e/9/*])+fAfa1N3o|fQ@Q|R-CA /~E]|f&T77R^abEPl@i1: 9-`lבmUP>Z8rLqJo @+Ѡxj9w֤TTPG^l C052810vM'a *1KxLеV߂P%~[.!\uia%$ s?O7[~dh&GnC=#ȕ6KҶY\o<=/l#}?J|vf KvGO`?_2sii_9)՛6]ed' QqsK(W٥R9QT"֭7;gɰ=(o`~u6ȋ{.~c$; <8ww9p\{`)BS"Tׁ\`8y~M6] =z8ግVޟ|ǫ>7~#A~C7u}7Reߖ/w7_r' o鴑FE0>Sׁ}RʻQ%8+ORv~ӄ-RUTAVBȾχ`_9{Ɍ`{|"r4:ˬN/S(_RZW٨#, *-¤O>Mƭkk:&Z[^6LA+ M+* -]?ȍ׹/_TB=*˭,9xCVJm73g߱_/?j=@\[mQu/xΉZR$?s:~Jdg<3vwlP-6Lf#l^{7ZnK|E٭/=dv[ϔvw=u=CCܛQ1ViV8|jIg~iIs* ֳ(U,):a6*U1g6LM(@HA3$UwG'gkYi7UTNUTf#UN\_KVk\\mm*+.ru"%OKʻgbj Bugb%Y];4ޏg#Z)]2ɬEד6 $;#ЭP;̶+)YczZaV*'C0D*ERT:?)Akmc,VNJ,\wj+I1hQeH4!* 8RB|V|xUb45s_ WS;otd!rކ w:Q'nt>MBC{Z*Ei/RZ+wIWH;WOpBVkINP|xdՍiZ<:mk?Cx.gښ۸_a圜ڡp/VR[gvٻof̄"$N忟̐"b8+.G׍F7Зf_ *niLΰl-I{A:>gK/CSmRJ"rZDj&BD)( 5+@!۱?ɣ[QEg $j}eܺ|Zj@@Z ub+ \[~9O'輹MǝsL&i5(PTjRzpsN5'I&w{ʙyޢȧy{ȻIn6{ӟMa2u7O{˨gad{{Xp}z*_冘s{jS't_n'n dX_tWZ/sb? &XX6g98|$42gKO.Jzs+ˆ$䕋h-uvScjN1*i#[݉k$䕋2˳=q Tqb]qB*1 aޙ/{ Ѭ68_ ¨6-_ɷ:mICA5wkH%')bX,)8Q!>+O?L` G UQ:L̶mȩ`k)p)Ln153F+WklH:-'Qn\{w+~'%|x7wEt* rysR]?x)BLDOvk{.!6C~{).@ Ȅ:b"[w`kI$mO 8nl?`F>PT"B)xl&>_1t?nsYK*G&RːyyP=$Z<`WW6'>KM s64Jz4]bu1ee..6%BaSEJjLwlQ@ 0!BX-%$S1$L`MC@$ C1K2 2*A nF4z| ꜠ @ F`llJcSY'bB/4y &5zB#/P:0#t&0-Ri^ہ`Xbq`"|E%_NKJ2'5p f#6G)5,6vb! 6'X<7jd?xӿzgWarU }c9O^`@[ª}%D Ic GOS!cI1MRVP*R66J1TߠZu+LaCmv ~Yu *̒b{}ç8FEBnQIW+J8ny&s#}{L|F["0.dzFșX孭j %ټS%BXwbdNjX!. u1S4l%eikjߧ$皆Uķ{X kXvѰiXM&KQ<6., oU(N[R^pt/dV"(^:&߾6F Ȗpp QIAKIN" Fp4N#E48SQ,B"WBFy x#:h,)9!q9ivy" &g?qdxnP.zoGDy<(XSCq0KVo=oF#&9?we$̤ma]6(O}_.gm_jwT/rDS Ϣňn5Fܝ 25eݍ*um틥Yr%N%5ڐ¹쨻\/:! AT˩*ė'RN8 tHI`0 W?Ew17fpe%U{91MR*h Ur%߫zַK.'ta(EaDR4IK{o!1$ qN Zb% APF0VX! S6qoSw8_t*ط #㳼\0B`>PL6W$-TJns `(E,ע3ܙ}8i Ȫ-YTLTE&:bHYC#KmJJ8}K=x,!Z(C`x%,& ގ1΋8xWX5u8o~Em(*JDV ?n Xzl梣bXE,6JiR_CFVkp0&"!ZJr()dĪZXin`g|cTA+h續$M ,srIQBI=rIKjXX,33qVXq̉~YL)L<|+IE4D68B&MV+!\_`/[ԍ[y1sCN^y~O _$Aj竴l<<°RYif+AHYpqRly!B+Tr.FDU.KO Z(Ue'g2 oc |$؈a#P(G!clhEP> x;l&J*"h&!:9;Wi?$Xm\l| Nh:ڼ:=)n/i?a2HM݂ h)|)Q˖ $ D#T!Qj*RNI)d^Ӭf]Tn{7!eۛ$I8Rn+k{PMh-(%hs.moJ2Gwܿ2GKQJ׾@g 9r`;O tk˺b0 [3J4O"j_-[x[Z,$2ĉ',!Rdܑiㄠ?Hk)?jFg_~>q]t-;c߫;uSf8b~U<|`#W?foŕuڃא]v~/[|#o}m9¹,V%) I*,b 7ARI*uqb2m̭NbCQwZQw5`_ˈQ<1p4s ] S~!H֌Fv"n]VVjaM/y=2%{ ;ɽ]1OBfח_ܻ}A3&(1ҖƉsM >\OHFŮ`?/RPG r4&Jck1`PlS8,3L$DUlN5>[Z.S3.dEy*ʘ J~SCHXc1Da(uaƬ{Zʐ(q Hb8j7b bESE2&MrQ9 (ښR栬` ap]$X3%2?\”e {2Oт Ea 5 P66NR1'c KcEYip(. X74)X6ka)xѳ N((Bɂn8RDx-$fؓ   ALd*PG v 02K$v%*~6\XՁ7y.=AxmJϻaYtj#XKtgƪKb8Ng@X$M1e2(pib$tVs_Gc1rT7*_+]&V5XegAoH' -n}|]KqډDOnlv_<[^GF~X~~Cn-?ȏWG;~ b~BJVl(|M>E3ILJnͳ+O; q$S/] B@,JWmK6!NhöD)E J0:R/M]q)p=(G``6VUDN+ "VۤgಪȭR9fxj "W&9ʫ@NZy lA`Ot)evUYV8n{ҔLep0gPۋ L=E=0[9ε-x<M_LנZE[G>wfNKsyo<ȹl(0ǵ}nph#1lD׶ͦmqTRr%iM:m0O`@rjQ8:G4\BSTDPNşy6aB]o~|Sd@2/Ly7Ship≂NjN=UgGVz:k)ڻ=VZnZ?=gavq#u :̭BJ0]Tp/*;gv2 (#`hY6"ju&\tP)B5n9Hd+o8a)N z7`$ˎ`U$ Í u G ov7lӝdtoqe1mΘ^ߎ\ =L{Kxg%T oWj4LTХU1vɃr!_wj/A ᆚyROg[wjlFUo6ӥ fp[0Z0Zģgښ[_Q%;CkTMUNd. cedɑd$S۠.$J OI_7 pK`ޟ1*ㅜOzwv5ɧ^c 5`E+\$sFMI cKH=\i*,Ze2L*]`M_13E{KȏL2{$_\pI{p$IoT1/XZrl1W})98 fex@Dd6v+>pC bUP-U!8EHfS8ܬTܴ]T_/xBVl1̍;Eo[ƍKnAzE8hfYF8 ip28eMe"qtb\\>ihZJʋ7To#Wmj,6"xv!SiU","n„Zh^X"DI@BP@<)! 2K. <;:iu#z>g͋s'2٧g\U:-W}p@b95Zzg{#s} CEWɵ7U-o﫧GғGr(-";. φbžz^I+b}JV:~D|=mlZ"[98/}&ƫqUXbzB̅[43 9EUBu,'*e9Zu9ʘ*>n*ӓLňj Pvp\*$R_V!Q.N:N8C? 
TY+s`@[(ۼ=EEyش#{#c+W ʱ ʱ/(7=p#³T l *p96YR"4|J'sB̌+%2[~CӽhA@ dp16-g_1[ $  p]?H_gPB+B+q ((3dDHFPld*b$JcPoDbcL p0:&[C kKhP_yyL&t+ ؏2,0 p8{ K6U]1]uRׄElm+1&X˷ztq-,NƩVY ꘀM*1A_J]]U=LiSa֥C'PcbBLq,UkمM1oN!iNOqoJ8ʠd'ň44a X?ؠH"i&(n H^).yd94H0eR%B&C!IEx&qbB83 2PؤJfxֻ@Ȣ@uѵ KyQ|߮ȩz~;sw2^-sʇuu~MAz,_}a;C:zkvOm6|m'yO_ONfjq+vYr} ?6Bd՗2Q^0ք+_HSvߣ<ǘc)NS5EQh(1ИgIq E%հDhdFKÔ dM X%t(ofwk*"@F)wIǀ?LLTQq"CHCn;w1kWB3+ALة{]o(*I#u/HN}ihB`.}|ll 'd,EX . ")N8R fm2q .sX>iH C/:AI9m_\MPF1ƶA+ + A3$E+ US(nyx О Qk9o~W *[B`J!?"EI6.e0[cK /kT$S!@c;Kq^=1 Dr&Dܫq*P*&T3V!#Qx zK;UurXNFC *pT&8r8C< GPWLPޫ9$*tj fǪ*V~͞ոǶCޝR6P֭;gf9(u/kN 4+pM QЎr%rsn: "6yg09t&䤿$+h7:xv']lM]%I*Enޠ2|8;WVkD~'h dş@>=>=G+nk\Mln`Ǭ.< q'*pb^T@{¸e~r҇^(fwz[6̟I[ gT#v8Y9aYNKq.rexp)fI)f?9Z2b ﷃkNpivNg1ޠB=U,^$\=Y J/zF} BiGY{Uѕ^;wBVa!u焬p֝3eQͯg**A¨l/^8RX§ӇLh?1dJъYK0/pLJS)aMblTdBu  # =vlBlm‹o77l["d!5nxC1Rݍ7MԹ JOk}B6Գɣ]֭-at.X社IQhD"4V(a,Qi+x|8Wu8;j@QG6ɐ?zg?}~Pʋ+'xsa]fxlIeeU% ~}љ W8xv!+0敤{A!β84b2s *2HKC%E&%IfH$( a"3v!X82lQe1JD`W@6xp&^룙i5;X^]4 aխ)@>{z7=Än3__`F;Jn U%n Vܲ2S+ Ū],t*wO#)%?Ew}ORγyHYI$Ȅ(fpReY*R mLb2$Q!$aF,Z$'jtԅWmRΜkO u;3:q3D'ت3> ͯsJ(UxwIZ/HEoRZK[iVDdqIf/ q:Yǖ5a}FvOaRaըPxzf@㖛 [MWSA.$4TazbǺz$o縵qDv;lݺs.TYJZAx09έg,ɣoߥ݃xtDn>}Z^i6ZtϨН'ʨ'VҴsΕ1 ufU a*b%ux&1JE#($S%ƺW1GJ5{2`1`de,-քJS• &$`+UfHX4ڢ:ʼnL>SUѪ1OQ^qR;)I/.)`~ƨGBD&#HDH#&&ҒQlchňE8& `l3JYB28B )S p~8Hňdql _ItO Zc;iP_3=HUeR6PVǜbح5"8˸t1t檠AF(.ك Lucs8fe3y^ = !Ȋ80M<$(M(+d`#X$$ 38̄sJn.8MՈ*`TGsz ˷4%bϸV%Gՠ ,5 DE꣜-B?FRoajՎsLq\=Ƀ83Zjq-B AF Mi;32VE ЮQ+3 !8Su~P Wd/aHƶ\ER(|V hv.{Wvty€7qj5$>gђ}jb87D>SAm7@ݯV ;?Ww`}VۛLZvۨഡg4}:rhϴN(k2mh!k/ꅀGB1Q N6ه)ID9i9Hz p~c.ns|sqW)WT\e:TPLi3'fHhD̚Q*KLݏKTcZ6]k5<^:1F=+qjt;U[IU\MhQ -yQBzkj\Qesό5 C$_fukpJф" F1^jl̰iI./Ǐ>8E_!8,_I?'_z A[$-`O'*8W_*]ek1[a}Sƈf\WC>uyE*UX㶕 Я U]ӎYlywZ)VZޅVC3M;t/ċh:+pvɲx/Ws^pJ0##(j1^[{"t|QJ 6,naA}Xsd|lujm/-uǖ8M'xpEwmKо~9['$=Ah%UsΒLْE.P(;\;3dk%3{.6}q iSede\K۪wܪx zػdex@i{&gV,:Vi^02zY=?_HUĤVx+\>jqD7cە^`E {J"5^[dZ!Uؕ 9#*%Y(zl0UݭZ`+&.I:\WAk#Bk2Y/ą #<coq [L m.[A*IIМ&=Q-o*m6!sB%q M`G2۴i;{j4߻7VP9#Pe(Ib"cosqRͪ\gJx]An)< 8iȚq]GGg._߃3bF9Q`4x7#<^rpW3?ٔ{yӗYdt?A;4~񳓗>}=,߸wU*l;~^^H~%_va8ߛم8Nnl3o(|.~fA0'9ͱZf,og< Ydszilzh~QrxQVχh~]7%(4U3~9k<Ϩ=QV~nƓl_ b|C2_6nZ]y|triM뗃2ή,igw{Fi? ThN&\FCrpϽ_@j8HU:zݹp M/I 0ד`6/Gֳs _O'H8U7@oF_W!7 K?F<\>|r<}~ yFBq'm"90c!Y@uzr<$|шĽ?Yt8|:毾/{ 7o~~ÐO3] jxsFF\W"8xVO(Z'l\zf,28/4Ƚi΢滧3[e=:MR pI HbI [}!+LlNbZk3~mM'kfCmtr~?p7|R_KcF4s*J bTnäC*pcu~ $d<彦?D̘f0Np !̾v-(OlӘmy-Oci1r7J]`I@фh xb@2$8F&z!qo6'W=G)+ө;P:dAL:ŒKkI6fgr fk;(k j]/LAP\Z$(aXZ=Sg=1xkReT~ C3Tl4'IC$Ig\M.Gm `Exnm>^ 7ӧqs4nNtQ `L@8aV W(Ѝ@̚:qKmᙈPM, LLJkujJx;yRItÒۥ6o$ʼJ)c$Fj25!Ŕ%R^)Bijr,gF1AM4 `+e9Wcm$uDLɓ;"dEY cJkLP%QDPA!8Pd8BʪDKN,Z*𚗠R@֓ !v {jntAZAz`0sv[yU p|⩂$d< 'K/W'!D"I&'SyO11X0$s=؀[A9&5`XKӯj*"+~1 N͵Lrg&9n 5'JX%o 5l#a]WJ6˵j5fl#ŚmMûd-[b2;F!&=|c1HN&az\Ϊܜb]r''G:%b! 
Wgi?a O:~J}v #eLK|>_=kg]qx6wEF3XőN "oz3{qipIj]ldkR[2W>.Ycme<6OU@C4ӻ`ccPl}wE*,0X`KQIeXȤ` Gls6t|ϨBR3KM0--g 7}+A",0ݕyybL~ޒ{AJIdF"\VLƉ&+eBJrbOP0+\m-,nuR &ڽn& pF3A,EBXY왩nMdP(r&Ny`V&4-G(!v-DA&XU@@jq1ȡMQ{}X9`z=R Ӗ lFxbV d?:c%$·`key T($O AS*@ha:E(SH5_ 3ഴ4$$FD)QG=xsDAIv%Rkݠ(5߱"%Rq@ ՠ k`N@XZCt`(6LQrK @Ă0VR-$,d3ٻ涍dWXzICjU!e;I]q}IJ@YԒcP"DdV^olC`랞 Imz<9K-[o1 0X"{T}MA9hj9~||`_DZmW~yslXRxr}ya{Elny ~gƇ ̭T*W] Fn>"!BPjhUTcҘp{(Z0H`Wr;`&N)ĴB/y !8Q)#H7G9BO$J١+'UHnr@@D'7]s3}n"ügsFJvqH\?g֝ pdyð8r< XXa2LHaqkI`1#O s͝V͙ V3f<؜̬fgI, *<ָDBK`Ϭ;;*.5d{D( ,׎$/I1.=ُҊtg r½u Q'2 62G s<q#B]& ц,c6Ǜ8,0V*5E*a^ k6;#^ tֽL=n/bk\?MNF\S \tj!Zz$} />1k?W$8GF%( hd>nǓ-0|ckv#H(Z] {'f4^*RT3>Hj3os[ |`2]:`< _fkSR:Q Js w 7Z6'Bsj,؝!mrdcD3,s0NQI3es浰)R6#^b^↧hs41_( ½jqw$B{;rho8!$[qDrp-uxpk{:{f[![~;KDdH*M:w&u4v&w,ϨGL #^Xcƥ ,I5IHqYb=5Bu$.iq歗TW_t_vo{M8{SjJ_+GBcPb>FR먴ǨܘGMB*jp/BcƕX0j1PuQ{~rt~miQT Eң)Z}{O0SߥmFSݻS?)Sz:i}o)$[D{)$}:֥J^NitPzCpD$8bp}t܉hV՟DVਪJ^X1}jي$˹-ύ%$c 9D&@T`&mNrkC v?fQgՍ`:K]H8SlQ#9՝H1Ư$RV8"J;3:R %L3\*<;:oV# oQ]WF8Ry2)%8pTF0EՠBa7'FSY9h|N`Jy0+LZx3e >S 4.` +YP?j0l~Y^M`µf1ПO2Ʒ3?>,gX\79&rBIo|vb`aG+'`>X7} 6N󡙺I JG'x1s?ˋg<{{_%TNmA,ubȜAC1"ǁvDU3@%e ޴`L 0UM۷owuv]6J:#N_fnmJ7{pUp8| 6o|,(bGMr{ jXYsrU̳uՐ\@;'Ɖ:6E+vm°Y7竃d&ELYJ XUfۏDZJ dgT\>H$U;p:`*ҊHpU2ɫ:2MkڎL0XU'X;5 JG.E[v|Թ0ۥ7!an2>ћSv?gӻԄjggaNg*F[ʝ)oyח_dm``#0^ŧW)[_;$%²Cs_.v«qJr FI 0xr >UNЈ(*![u}]+dء"Ly)q&&z4eVZhAO;0V=<qˁD0l^PF}(ÕBxT}'V/3 芰TDPюepiΘV*Drj}seA5xf%JTZ$΋ٮd=t{/Ux;H&rb)J Of,RؔXvtXGbEO!ȗkulFlr^҂?lH4<Ȯ?8B7ʈ-anC1ՂhވHv##&@7ՙVppPri3޵H2sdPpbahdD2 G)ol&Y)VE3 "w/˳?.]gI}9msދ׷ĮɗۛaStHʇ en<\iM(J85x8'+3ŨKzqЎ2€ȂFm9.;N88^8Wf,vf. ߉"!BvϺ<$P6 YR\PA$Bo9N`ϩ.~֟p)Hi3L(upӃc9lH ,|"8;C' jX(C@T"֘{t'OΦL?q%@\(\Ow͜ "8r`-u\h4M\=L@UPpϸ`u'N6mjcO+YY&97d$22)6RW*^/j4$h낎kB2G#52B3I=)0grBAXۇbixHU`P^j IwH lމ|⍏`7 mAG <ɑqw}⚆kDqR"zk7 E0<=~e8Mk/"jplr-hBQEԓm띆koR#qP=xKr+?\/>$6Y9|+N{@$gG, 'qpr`I89$,b;tH7pZނWKp0&t/J6 !īV#6}{Ĝ9;T@R e|*MfUkyi'ͺrq,f[B/6ZipѐvǦ,’[3ܻ@4gT`dcKC  HSos+S6-GZRɦKsDB*5cGrLV=w9b#l KJ*5͎UԼ[5rإڐe_[^S?]m8v4.+:8?o;}ueT%cJڷA;Vi.ϋ/U[b,Wj<(3 je)/*,# ZpM)NOO&3F,M0XN%U >$qioB8yB gyJ+@OMMcԽ*2,0`:),Ϛg˳vlΗ>+r/߽F#IQD*RLߨ<5ymAQ'}5\n9FfDQыg O]#, /j1Ϩ i¤Ijwvlhn.ނvӏi,TׁEv!Ռ=qیXQ*3+TR`1>hSt9+y~kPfdS4')^$L5`pQ&\ԝk*R)gtiiȴ1&-)鏤} 2v\vjRRsQts jxL' ثvyw#ՏitFA2STѼ֩d&2S5RT.ؕMC[Jg17&ܖСGi',2WqIJ }ABNO)bqb)Dl;7ddAjokOig67O4ňSRrOFT)ن-c0wl?LYauSqU#BҬTǰw&H֝ވ7W\E{Hbdj&&Oq݅'g y=τ*3Z_KDuH n{ʍh'Y?3U'+r-u(XN!i(4*"F׾d oue@ՓýR+ fy&C3lLr PVlؘٹlsO hXCp]繂xixjՑvd5 fXR ?Bn3EuJʕIlVd &F&>lh8s bgCLOW%O,igڠRtlP&j!!PeL}|yU~ ND&I R/ Ɛ?K"K.BS*qT`J.ZkŃ0T/i{U*EBf{, 5Ҧx]/ߔ1 [`8 aK"6އ}/SK[]@ec8ngIv0kv_^M |hF~~cϋO;%u#s۰ȑ~)%1"< !l(FPb&&B)L<(їEhPj.[F\ *'uJe)j}~nTC>qvVpCFP[#:9#6-QgRXkKCVYWԘyF&b狑 킎/eS!/SL0 1̝@[˹WYɭϪ`%ɭ0IxnSIo6 Q+>:\"nn[Tw VK\Vgg!;cǶ5$)z p U?Q:(W翃EA.y0S2XsQE]Pf+nE7\H'4{3i?eTW:` ~Prn?!;@踪qXI"/&< vzrXnIN6!Ƃjk?0Ӱi%g J C;ƶ Q[L*ƒ3t1 *Y% g2?F5 t YoWt̂$]Iq g&T)тc9cP,) ^@mRR)QLXBFMQ"GsT)غ{:^+ú:+lUS^)WVѵ"r\_y ,neѡc"F(r- ~_d$R$^i03ܧg󛫯E'4,Dk7$+\0O*m?F؏ EܓRN!✂6<0=0E;x0T.oI'h3*ED0A 5dihOȆ8#>O]A,nEIPJk ϔ`$yS38,!yw뉶zMPkەkC 桷_: /$y!AB{ "#dThxQlmZnStAE!plgC%ccXk =}\B\DO7Ce՚WF5v=2ev8B!%h1 3-D7T@⇂wҤΧn ~ՠbITS"量}x%"ۇ?z?;PFuY"Cn2\x퀲qU2p%o>T}ߝŭ) õ3=W3K)eyI,.8m=9l26yQj]?q\P̾MY_MC_jK-A|<\9ώJkn:g, dP&3PS|^ Zm{?|?`k3* L>SK1P㲧X27n+ҩqaڑQ܊rN} I}t 8sFACߺAAHe'%4bHԉ+=GOx `Vb Fk\xb$6v_I4'JϓYt֬ɽ]X:j|GN{Vp] YohMJh^s=?ٹ6ޞ㏹&TO@:3(U $9ͽ"kp)JɤQHХϽ6eBZ}HQ9It~zEN~^ q3Cwo;Od:|U- A?ipj:w'$zBB vn!˯w |wYWBCo}Z(:\AεJWr_r[Rp/d1I*4)Wa:yI ]_iw}PKXQ*On!!׆x7^jΓ*U&*LTV7Q!8] Au 3w =ۨ=W}R@YJe (خ[ Rq"m?W[mF AЈdVH! 
R`1A c=Bư9Ι9n44{Qdi}/ Նy_^~B-ͥ[m6]#H<+7V1@ j>zA̵Z㦇rQtUU4VBT`j W)꽠%ECSaDi\J%'}QQuPqJ"UjCP]YPY ^s" A*es*=%˭W:.+jbEbϰQ,(՟}b+?8]˩ŸcP+B hH!_goO@9)0 vX=1ibX,.׺j(//%-l\ҶҟBao|Bg1'B|^Xk +^`r93,'JY"פlcAFx *,ID(Jc.!*ۘZK̥ JyΕեM<)a-Mˡ@F 奵vAM`ţkj^cKJP 5%iD!pHbQޗwbm(47.%R.5Ns1&^g;2ƌ:zNy3 S 4p.E-,C '5T_;\Q~3e,WG@7# h4F9UL.v #}ܦtѪMzP?R4Y׾ Pa6c~jBH?k~aȊd z!wj]Ao5D O+2 ;w r o3"|UQ Vz1yw)MݠM) 4dӳ_p}?|U @T~LmVI7g'r4qvOg6rד}7sc/?';0jm9|}=xO5*O0CxZ[t}wQoyy7?y =F;Y=IBINmt\OnVeVQv'yd}u$pL p$ץշ!FYg ѿd%M9O[K+t-x+-߿/Yz,cZ.%783u.3^T\yY5 V -fgJ۶_CmǴ/񇼉4mk]}xXFPꬅۿBT9PJ<.J_ & [RDž͡})#P ہm G/Jzj߷[mOTm⸭h_ b.3J*\.J*(hSˬ@@@x$r"/u"30y('J[po; r9~rٔ/旷*I%{Wt]jUIcZ >$uGlya?|WPYΣuәkndw&^jJVI4cboL@QFз-I8wB1meR.{rA&`YR3B3ߝyjmB9%TV7' %aaZ* +.!ֱwdg%&#K)#58=EʁnN?b9ݺloׄvUIJng-BkpvݥuMv H &1+,\PO8#\ܓ>ndykpLI,?#5DB+Xb5쇣=Ox>jߧ{QwWy^~|+}}_ƪWU!e3"Ģ/FCFGák2sf% zr)Yɜ· <R ՠZU/\ٕ4;xu%%@H5A1Ksܱq9@wnc7=0yړ*j=~e5f^(ֈ $d$bNLvSv&4JTf}[F8,GsCqD:ZJF,':N*9@.:bl/ A*l!RFΖF$5褳ٷ&p ªblm]fO-l3ޛvEDw't(y݁?܊8a0?Ͻr2A㍻Q&>~_dՄ vkn{ɞOAWYiL/_)Uq ZG\vȚ S[-#gQ#1\0-"\j@` 0qSpټ^o(QF9XPJ>2 GϖFf,<^YϴhrOaP~H^?/nl.ÏJ*留T&lxjc8o~~d+yy^dߗ .ওQ5]T(`;#Ч@4F$=,iMZ($$W<.KQ|~"d[HBENtnpoF?N%' ydzf#-6#zC5۔4eɦ~^ڄq ك1   Ȓ:D2uruv8ki\\Zh4h7_'4"$㧄vU[#6WmD`E#y|kiEXƍI T ڶw*.8gˀF AU˿%`us8z5xЎH#bN(MNEӼ'ݣ&W9'/TzYΞЀ-̤-+G2DcadC4̐0bcרolxDZ}Flu8 t^0]r S!̶49C5v9=w3GdziU[ N?>U>b 9] NeSkNwIPkiVq^"2CDAvyihi8K`ǒY hs*MsȜT@2Ԥ@q ( ȦN?%&@uN} JOc|6.fCo=.Hkgq,Fqn< zIn֓2 LV~"6_Bعh.5e3PͱakcP#.**LUSˌdžw|T:##uzjHӣSlN.S3}7#iٹxvBFBZ6!-9ֵlAҴ ɓM5D4f_)&R?qx᠀4>=s ɫth+F Hʃ}=A!@젛.Oli6]ځNTp81#@d{_+%6)6N/!eW]Gq@7[j_?<;A~ܳ,E#Z"V,^N1mbQC-G!2 FzMu;0X$G x4 ^hw[#wjz +N|WaAu 4UI3/dL ޾yռj C@SS2x./e3/+ml-H3]Dz-}1=( a@}GK_h?ϿE haKg 2'c%}򟕦3z?QNU n)lBE*%ؘ!lKeϧj>;5KzӳUt:y;oK+CөCX1ZoBioJj4'm^lӁH6߮Wy\(?zxE@ƙ#׌l6jg,{:r($(FГފ[]zBjpA2Yz'}?u;cc~ZR*P%HꋔIWΤR5P7SGA*A3g'@FeԼ(͖(wS5  1]CQqRBgN~=íj*틭h}]m=im,Lfv d9Y@^|v)]R fik!i!Jnio_32;Ld2S nܔ}Nc9DV&玘ps9I2'Fsĸ/ܮ6Z Xbߕ U#$wz!D34vɰuϤ^ @I( 8&b<O2 A]5r8pLMa@P9ջOhevND:;D9 3e^\HZ/~o7=@7./LGOWyL,Uc*PugDlfa-b[r*4H+],< V׫d6{$!~Oqdw4rL(##2ѫzPqe@x;V3u3H S@ _~ŗ>WU}Jo~zݱ*V!!Вr3,@ *g k'r蜡R.jVYBi@_̘PI̖2 939(6jL5fuYl-[H[)zVdB5GRNtO7rJ^^ ߦ+8U;FU*kaQK(pTBOJL-2Q8s3jCwIz 0MD)% 0; Taqzh 6)Q wpjD)(j¾?O/ ׮PQ>;b(acbwʕ&$l4IrI<4FlxDASe6x| :s1[} MKvb#"ag}iÄ6") n^~ Hc8tZ^AF4kIt1Mɕvj$#dwc^.g~ӊJX4 q[(7239O"_tU6p7qB / u :yw0QzEh:ϚC+[/u6cW}wI!iiOR>~NT PK*UX硶]uЎVvW8靅0apw,um9u 2OC,dK`B{ WU⥚)({_/Omuqf/M~4vu XzFWhk&MQ]U/͛w޽H6f\~Q5~ؽHt˷;}燛Ln~o~g|P& (A&&셣ooX];N'J2_LfUZw'tf2];I=`h']Rɯ7OC;5Yџ,EfE(}0\ "=ZάUU"Dfl%Kdͧ'2aS2.P/|;geVP/O&7W᧰QGiNɡ댺)ʬ2Xe+CvcqgJK%(}G4cfb<$Wgқ0g~Gi0;oT1v 竟_JT?Nz*%HwKvQwo_^xJuJ_~TmnQ{K1yOoZ8ݸ" "PzAҮ~H/;2T%vgoC,\>* Wk:rw}xɝ;&`9H]辋[Ѡ8_N_{`ML9<=9wū}5:/'y_ռ/ONtӉ)V_ߜ<n2|˴&d_WjaŪ-xrVTnG0ti2޷`<8 MgT}%i)w:mI q.w Mڔ>m?~Ə_1\ .2nc>8;<.>N&DQVϻ!S> ڗggo~xͻ|7'H'^K]:ϳ24|nzݧ1g}07S~ڎ?HRЩ6ܳ`n.Jv'WT 0JU?E*T2`1rix#B$2UƳ-[?Ú Bh߿3\Ib=Grk\u硷iuM}XSa!L;,S&kY kcw TzW/P/fXv\e2.A_E A +'ŶOme5έHXVEMepm4ֲrypM̛6BƼkZjs6"b/H"/?3$T6$4%Waު66Ǫ ?ݶݎ=&5kՃ5/~عoS8m}x!:F v/erb}p(zAН;G ,l dsEAjGLKŒ91`@II@q%U#42V墠8fR /5?'S+VY'#Cfj1&c*o95MѬ9=f/ #K!6|{%b/hݚx ͦwSΡ\s5_ [rNLk1< y͛B.tgO`fg%{U%;P^);^aEfbyX!duì0}ѷ4ZvyLY.CUݩUC0XsC ZSޤ:$Z΂w^ TzQeNlbf%cnbf5^9q+u)90>U{6;Wsme "LxwĞ#NRu\ѷ;&SmgM M#MM-WMLatEވ>"QHzKT1 JG+Sx"3G!hKA;{ZJQc :%~ny_wSD;#|bc7SLۏl]XwFY1B[;G z;LfhZ IԊaީHXdS4 pi(6!tD*eԔ5{== $2h"ޜ{vi =o{n?  s|,e~3I}|jYS);f/ҥGRJK9},tŖCzK )'?fQʢSS*,A`J"auƘ)\lu1Ғ{5Cs{cE +n$Ҥ\~0 S}ŰD(wİh,X)4ЊXnyPa*gLLJ'j)r,Fx4ʠ,FF8"< j0ԇg ç掟W-leaufPcԂJ=^mb!wAzCCa,sA09X$86m@8IHpZ[e _֡tjFy*VIduI*RURaᑷ)IF%Glգ O_qe\+.fðҮ ;ުVa6&Ui* Mc};rqFmv|b[GU׽~OSVAE\jN*|0) ԄaW#,ׄ/uTQN#S*fVjfr{c͔j9[^n9=KV#-?|ޑMCW\JV~{$g[:<Jm$jIɖ%4ސA i0Z",ZN$NQ~Z#xȫX?z?̈́-,vOy?vzcEg<8GCXmaaomLp {.L9~nvR2;GD<,-i. 
.'f"&k8КRT]a5\ -2č~~ sg4[9}Tl5_NUrx)O}Y`L j=tK ʡo/'R&ڳj/hP{>GgTΔċmE:~f RS$u4pGaonFܻ~GMԪ=%=^Yg+v)WvWj w'2 /f'U!)U=V&BW\14 t ;Z"8'{ްƙE0:B;Յ˛+xA =\e`-I*W0T L,>QA1`)vr8P#BH91uJ*E:hq~niq/Q"#X"G^;$V Xͅ(zD8H0#4D<\>Vt Q+!Z( "xtYǬ8*LHMyFēKH U8i౼,W07;-2Jq(õ(<9K3|+Ma%w=5g$$,Q(fd$B/M(8i4"m(RX,LCs*&)F+nҰ 94p0 )uYOTc"C1~y=Qc8O-RaUFbVlogϟ`8MeIKU:^si&lr 5O.VAjNճ1UPδVH=)7f^gw:rE Qf0>:s:[ _3/H#a/uЖ;˴93i b, >BrPa\6`<..m0b(¶brH{oE{+[Q{{W Q"1X $3DBX/n.B(T#$]z5kAc.ֺ.ᶖ ML%CP.OE0v(D`0.0AT!F#QF0@hBhTbJ Ũ @dcgE\ !P!!`PqR &Ω s aZ`Plqȹ)"kL:Џh(*0,W.j;0g5`3 W˪f?*9wyQ,DBnsQYg^|XFtưfT(j^k@^p RJqn &耰RKqT0dS<{RF'b@A aLhH(LK1h1hΖ s8?p,Oucx7I ev48+nO>&\ty_6AB,<[-*,Ǭ@>^{| *gB[ 1@b#(kM+ vo2֕J.9#䃤j1LHz^5aaylzgÊ(!t[bR( ,b? Q!!pYP$gJV.)+,9 #KQgׅ22T}{Y!tP.HD/~׃,\,nznʫTӜ~yx/S_ubN͆A@҇Y']!Ws.]< wюUꋲ=%)L9]â^c)G'!hfщyR2MRlh})oÌCj/&^UsґiE~x}$4!̩.Q:YBYVRQVMé&J?WԞ-0!= G$%amsW?m9E;;X.l9TG;w5Fz0&-=KRM3[QHkt(arQVsDk+f|oKJހ=Crw.9h3'H3ܧ} i%T*"(ʕs# GԢ.b "sNRC*g3̹f,&Hjf']I'Ũ'n]7?nt/@TЮO:8t19PKb2ʪ'9r(1r(,J跍dQpL8\8)!P፧)%U 87j]ԩ{ׇ==O.0ݷF{./ԢfͺH1w{peu>V F RY1=jIږD¡iEnR/ցX^$H~0zS<+CI)-wԅAly?XQ ;ɱF0cIB Lƒ Nb`B[a'(:PyKD\`u[_C#*c_Xzy9k <.q{.Gg?_ݾ\S-⓽~+$`Z( _2}>8K_LoAl #v qZf/!/ƱLr8c.|=Y*P2ii8&K wgw<0!tG?(&2#ѝɞ!JQ{n7pkح 13T#.nӼ{. (T_ɾ|v Տa>DZ%0<~A$gBl/̚&'}MJ[y)L 뵫nȦLM\ƈuѯ^:U#FOlGcT>;<1 Ȍꮝ~#7 Ƹ_l{A?+?ϰtduMXue],޺.ubrjX p?|:ZkG|Sɨ#Vї~HГ`sVi t}US|?=6IZ8v p?ux;L&)e'Eh5òdǫ!Wm?oU%G &p ZEYMݏS/P3>o!q:Uo`>}c,*6om\s䣦)@ũ}rH8yJդ-!SKУ*^I'};)s0gO=|T{$ U9 bd&'?;(1:ÖDN7M.`6/R~unze ϊl7{|XwTn_^ajmD"-'٨X잧erq])wYrU;ǠT^@-tT7p>hztK@I4%u}P'+A "J :Be!5,( !vy6D *H!pmoWȫpRA4}5\׿=<>cCl4\/=]νufOX}I#!#=Ͽ4`y50.m'x_8!HWJ@YVRPCK-jJK!q*hR온'MN49w ~h΢\` +"q+#Q=A,.FՎZT^5 WH8AYB 레Mz)=Zi(UN+tx2JA UeiN{\yLBKҹEGoˤF=\?X)p5f*2MiܡwgU)>W޳mD#Ʈ-3*uBkP]JE}^ 5B G7*tsWD]HҲHǺr%|K"^IQ#.44vT Tz*ێuFkC^GR1 ᶇyd{bD[ SnPTM|IN+i%|_{ ^$1jȍOu$!11ps礸ag'&zmnja.{No{ZELlΣ KsJ&s2⧈OT6Ͽ\rm-ӓ.nN?us[Գ :zbpE<"pjz~6%-^_ i"ULrՃ+^Z5(r ߨqjSmʩ`t6۲H+N>+y Y˺G{ӽAK938y@^ҽUƄMW.nf4ڸtEFIJIt /rfA\e&a zG:ۺQ/-N_TSj fQ4eq{{Zb0t3iG՗bPS&Vp´VvJY֭Trio[!-іU"5[VcV 6\P\ 5 G׭,b^oe?XٲG*i첎ZGRF1b,,I`!T0Sܕ*(qIO;mq.dCz8g ~miaՃk^[hT0*h㩤h ^K4ZhHj?v9CW2Θ9ˮp̵kB*R^)!v8R}MlQ砤<@O7' de5C&cs%97=(k8pSVMP|{JUR^ʜ<.aJ寷W0 ܥ (p Ps&J. [FbJY/Kg%8 zKz-e)%3Ri^M:N-l7m Hgwm}+&𷓇үє$3~5nQSh"Z\Ўhu@J˛pXXU?ͮ>P0(BJu`eJ]T*}WE;7*=(kN)b8kQ)>"'V ZMKQux,+$/*]w"SJqHōLqH:Y V=\^Jz>Cg(c),2{?7/ qo"glH^MB&\gO̚n[7"QgbNʗ'7wX4gr÷Kd!ERWE1c}mb0tWPB:Ʉ-7W12Ɵ|صt.8]Yo#G+_f3%}1kgeʢGdR=߬"E,*Fm#"22B$'L pbŝto'F;^1-w シɁ!lciq4Х$Ihvݷ\(wFV{)EJVC 6o|8v=[xT)NDF1|ɾDpژ!W䔢wz;I2S]p?bLz iJyWn}JUuV3uMsBhˆ?k1yBWe÷ uI g:XSzsG8!qpkmZF&Ԟ7]hB(M)QHMs:h& z蝕 W8ؐ)4ӂ" Z4*RqZ#|3ͺ)<.]ލ 80Ŭ{pT3罫< 'WJ] }6UKgK>8[>{zwN8v|BhoϣS*L@.a‰R% $[ۂ\>v." DнyR /JIu5.j6zGHY`G#F=+6I(Q3=EL TRmFwDR-rdNT$' &%+ޚixDNު-:[ୖԷm–BA7S߂Z xNiCӒTZ T=yK/Bu'ml2t0THZXUU+ï##G<)@@>nTÉYG%?[e(RĻyF vb G-x>8L6vΐQZqI~HdUzv&3A-?93Ǯvd.M_f^.cp9oJäǥ*7TJv)'\PIjd׶RdJh\,Ak< _va[MrVh^Z?_WhoxΤ-/~LL}Xn& ?,%sΖ%~`'8-VivQ4Co6ƛC@O熳0l3=g֟sWyQTTy`PyC,c0h4Lo|/`*7~8P9쪞݈7N+'#o*H}+ ~w>6 圳Q/;g(PjQ\&6hcؖFc@~oBtQ~n*/-e+ǧ͚yuugh=ojV/n~gm//;rзO#B![g]$:Z7CC J['p\R/\aƻ4yXX|uꊍ=1Vm=zdhB_9Wߜ8%}?]z~0 4 W5L819(8TAѮ+^\iÎlQhk"}[m8N݇o%f~?ÿ^E%e9Ӌ|? 
LGb)){ 3LŽ?X*kl@z)% Э ߶euޏ)J5GZxӠ8G%btv)(!1VWCMף煓|ѴCÅ~|?D F4yOV W] s60g`myPD#ĬUw*kɨBu|%W12V%d:++挓;ЉbQeYӑ<;&s KRmuq^Q RX)Wiwz.%.GY\.#$G 7+da-;K+D Ca{AH"y%W ҋ rqb s%1T+BKθ-%D8&V `yV(d`hK&I@EMy^f`fWB~rà#Y Q' T,rK60V`b9ư#GAtfP|!%f9cz*m98x|k9"ce 䊃dݩuz{2u -:@5_m R=ߞ堗+O j:yy gF0Hq2n<2 (y:6c pj& *\>rR %peYط/x4 ,Hq124qp)M8KtJT6 VS)i/W L10ĥQ$FkRRH#hkTЧAMO'v2+Qp R)Y-3_Еd>MfO$p0Sɓgv'Ǒ $1B[r!}2TR fd}] *'&+۰kp,=Swt{'ON??Ecm;E_D7}L41TjA)RkџWL$WVpM1V;< :3)G E0,\6eX}ʻvHSJb1d%NI /DPi 01cD,AQ$#E v&o\} K ;2J/Ivsw3'ŝ~Y֓-X1o(qv*7Ym?nZ+7fT_^iPl 3՗w併jy$ȓ&wGaBN༽lm<T>Opl5`#$QrEEZ,G O y޶u|~ix ^Abzjl8Pm݃v ! qEǸ\V ҸX 0w{Oּ#Dp|f8nM%O [PB su*JDw+ Ī%h)C/0@`T"i;nFQ*e^`¡`LH&j_kI\]\#.X" F+ ~/6SPk hZ \H {mȽ>.>8(diħJ*u`y@X}:&5 0m8Ml-$Z6YdQHi":j}΄>ɓe>@8 s?=pO3XYIx:wǗW\\,.&+4ѝ=\I0ZD"%rgwA:zL玗OOya'BW\:s?뇧8< Al aۃ?c %E P닞ڛq/_%JzBNr ÎYk;nC=ھ}]Z\%( Sz[iơ5nq )[:ԋy8#޼1g\.9߈RcHF)Rd` 9@{ tf /",zQ;8]!=dW(J6 ZT;h"q{ ǬӥO㖬 =B:n_& N:as/ڞg`e6޲yZPԼ:Iꝵ\& 0,e1I*$ Ӯ <mlٞڹzfWrg.ape'z/Xr8p/r3j׿|ʩ-2e&hvpk>Qn Qhaɢ";¹ 踠^Dbt0`! ՂN2uQ歈^lJFmuFPSw@"GX_ I9Z@R,F1:l!w(d*`J` wm/AyEz JOvo:cPb:q~]u ]u5Oウ TEhXIC@@ tlLgni.Kд4 ͈f'9kSE:8ЭUuwFt*?E%5 l<5]+rLrŴE"l_]S B4os;(gŝg-2% J6 x2-~Gz+^Xi$LK .9r83HBKJ uwޑBiPtŞ929Xșg3 Hb]ah 7)oyb~CFFX$$59 eT=^L>1C]tjuy#2 {?@qW<0W5žUeg=TBJn:%?x" t8VM\GGxmBЅ+0–%TDPT:w)0& (8 o@=z >rGWhi<9t.1I_&I3K*;ɴ)vPqG !֛N! >|G A2Y`?TL2R,EK_F),WD9a?GdnZ!&;{6GFp~3ف{~lQ%Ю9Peg]D<%rMJnpq[eG4Tﷱ]^PQySSp4o7a?֩su u;> V#3˝^FƝ=8A{t=h<'].&nk nn$s$2q%.bP4F9dDK dPАߴҾ2FCImIFْݽ(}wut5y9UmSQArDQɑT1߀)*uy0O lz|%eBFWǽt:_Ծ?EZpqkӫqvN:i뤝i9?.z?G-uØ^T&5]O~ŷA<0"UXf}.5Ji}TXǭȅ.Q72npF3Fhc(^mzh'i=ݨcpEڏ͗1"9)h&_eL}t%w fMoXYrl i5_T*5uC'p7_?Ex(;pצR=U5[o}DzunEy5RCr怞ǴLyJPX3Z zd|T)_\!PЉ'tJzMS/oq,ՂhcgmiZZ(Zʲ& _hM9I]<$}M2ӵM~;FF5446I5DP.r›\\9]fXxxZ8p2.H`*v>svvTp/5T,@]ٯӾ-a t) 2E*OߖHqq/x k4Kr ߖTcݶ;GpjrGuۗ1ȣmݬ@v>d{XQ;G7 _n7r.P%BQ.˘i,*|˙j'ߺ9g@Fqr^Ӱr3)F%-;{ǡO5Ҽ k]fMFPjϾe=\Ax2m<*>HР_̭Yw| Q eaŢoWmuY "8g.V.z~!pfӫKJ>Sp`nl'g F=_g*P! L#Ta@LSH7w ϳj@W i`z?qӶKz\1;LS?o5Z]}'}7Z' vw>>{ G*UXJbQS Rо#p}dwO7 8{mYAىR~6ٞpAigG&[ zل#].﷯aUHJ6w߀v`۶ɉgʙz Qw@D1T]HaK'L}iZxZʜ?ajw$F74WY3 5+;gkO7C+Ld8UWx F-ۆ#Q#EpJ>Щm z|߭!Z[cwkT$!oZ6iõ9%U޼hp8kw|sa.?IWuGp$2i(2哪 U Tq`>F0Z):l_ IKV ˺sv\i鳭6HQjO\Yr#w-"scު`Jci2ؠ5"8J}u==0 SsPsKp͔7&:JgexJYbr6A9CN-S{C9u:Jli-I+y 3!DRGܹ-4Q,ĝ)|9P|DPșc2?C ȃ̇X;8-JBgEOIAOJ4Z5!: L L;|GKqy)jJ5҂kF!fz'zR'-Jtp16L~fhG>|8+vX r2=Ύ2MD ȜM7{m~~| @#pe eKB(#+8E( }DO\V-_p $/cD۸T"kS^xůqHNӠnЩ5SCA9qkƲH44xqK(Lʠq'^SjKKK*fBii0 SਈlQD!T[t %R e5|ڒmE4ԈƝN]dO 橡[cЙpmc.piq%"XJԥ &yǜ Lphgv @Yth ⦜LT0*tYy2G1y(L+eMXđWO|yrQW ~|Hd`߳{t1G_wǕIX//pOi>F7.SZ,Z~gh?&;X?G#|>6Lv(Cu\}_7cP-ūWnŔ.*dS)EV6Hml#[7nIfpH >6K=~V3jzڢs6Wd񣮡pi, -Ѐq!XYjtN&fS^>%nQ*%rkvV9N$ 6=7)O糾NCƜB R2NCCޮ5-䩞# oW ec93wmmz }j>/k 8 INaF"$%wռH#r\88q$ktUwUk`U=y:T0kП sCiKȊqe?vƝ"꺵϶_d[|١^lR􌌒F(_ЦXlM)=jBE%jdMr/Ai~&&kfJE$OR*^G)}$(G£8V4)cF!)z^)##S *$aQbP!a+I,ѱ%)OdGd%h0َO#D3b_7&KA/i6x'M$VT* SkkPB8.RIF)mh"%B \-53,**,p42%+TʌϒfJK\0G!FAt)w=DZ ڬU=-uZ'~n~AًFn_&ytϿCzs^ Eъ ۂT/w&%aˉ?{dΆv0.D?Ǜl?|}</?,/0?/>Yy} &+w~xT+3мiYG_V)e ?P\%O4+hѺu: )dYx/,W{wɡ[yvι<"梢>qq;*dbΣ0Y(aBjmWnܴ 68bi q6CZDp_ŏzqVΎ=anЩuֻtRTϝє<9Z}+^o¤(OQşD:y&$o܄䍛qMXk$4SOu2Pp RA*r%!@%1Ai _|ZMN'73ԝziq$;DM?_̆ns03~ړ,We֧.ZF&kjgNަAbeIrƀ 7&I'TrHxV%09% LcU\ GaSo+54sLYR}Ze3#DR]LBBK]֙.%u4h83xDYgV1J*8h) H:w_ݵ|;> VNɄr/t*0S9řRqRMu >t4&Қ8,*eM­&:IXMFm &1JsC5,Hk(H9tCG!ܝ:Zk\'U$2&.NR (;G(N/$:ԱQ CQESOiqLz^<>y7.'E/qwg]j%'#e΂K. 
OL Y@FHB2hnv!UD iu{<3$uX3j ^:5R3TlGׂjB(#ZB7gP!܈Lc2X Lk](xH:c x˂'/c["s&1cxH%6D11l ,蕄@p7MUê4Zm %TjnPثۜ19B.yuxCW,l$>-]ߠL'C |}$,^-%##з>砉<գa%_%)&}G޴e/Q b\K{s[k)뾎UClXݬ/~IL>#/U0@(ei}` Br%ʱnGIkVZ$W4%eI La V^)UG%KZkyVIPZ ʛKhKKӪ񜒺"Asz1h&]DXFRr#C)߸x)KiɯN:i68j\8I-h"`"SDsK2)3U.cY6QҾ҇"*!-сK\r9+R댇YY N i*+JJ} 93"W`43rK*) c%ym(F!dRTz?K;nӥ x|^J9 轎hM]u;hvE`#oY4ACD3q ;r| MoS/qJ!M{,L7Š -jRh[Q=+y6iknT4鲕:7%nmmib*P\*p^׽ #Uf@.DUju$° ޹xS9cl3uT a5]cw(h_=P54\ImR#J 7kI/Ok`R*J# ym[{Ji̷hInp7R!\jU8 z6Z-mO'6{$Gs8͔kT2 e}MeFjHJ*t1AG[] q HoZ "u/(V'l I!2F o C)t/\e"aQ4MU/>U-FBMD#d ^ۼ0_ɿc3E4?pѡ{~Jş!׋?';% O1,ؾ5`pg[39c*c7~4HC|*v7UHd?T2cn^Mm7%Yta-zVmS#H#xJўjP]o9, )׆LX-6Ea/_JO/+0M0#MJ@e1k60`@!\H(bTC=o.BڦTl}8ٔhUWȢE wH\ ª2ҭzh!Sy 7UJ!5GP kO7ω-U 9 LT!H0>)(hlE|'صG |?=/tw͊%v)&Ed n>rrF<PKL\3Э^b VK Rʾ<#0Ap 1J3!{BOTﻂĄ/VLƃ-; #?/߶\sV6$EJSuTH__X|e8i>7L|= k"2忋U{CyƸUMd{.(i R>Ua?_)J!zmQo~^d~1@)k ˧7c\Q03$6~ZnNL7pA?/<ȨZ OV߀Ԭ#Ziƅg%u~H1ӑ'JTc!=c^Ng!휍yDtKƙNJ8.#+ PzWR<ݾ RRS.VD(qA936MK3ޥB3jdJq'0"Pa&{jԠd/bsmOhN85&6Z@F`޻q[pkJZ=,-/VwT> vwhh*tV8:v%ⰰYjgq%q ,\ / TKHgĈ L$8ZfeJm/lUfǷ2+*jTC݌z5nU%Wvs|aEZumlG.[ÙRc9!BypQ! sr)lѪ,]ȹl7Bz{[N->\Rc2RYwROh @k;-A9㊃⺃mOӝn>MU(*_kC&1Mb=Pr\rh%lǚ4)I{Ѯ$ 25.ǟWYTߋ|ǰ TwلvupNyM ⇳[4O?F}8/$Ÿphuc7™hnn*cI< h'R{!-4X4Zi>_?ɢ{٧D׃Oc Wl67_9QG䇋1hj+{&tn~w~g鱱{&{>A=~KBf3FQvf쯾8|ڴqN~ ۫_]Έ 7!#&dDd3m;\VdޢM o0[B9CM sF#rj kyJ$1 ItsZ@ޕ7+SoEڼ-9j3TW[ @[YHǕ RuIYޗT"Q 5с16 AxF1ѽ5(!.u %TY_˫\H͐8Ɉ)#lL2N(WB&5Ѷb:3fNۇezE e]B-%# #8ҍ鎯sْqBVC$ 1 y5َI毺6W cazA8ne^Zb=!NB%JDWhu^^dOe9SMf $ 1KX)3a*HID޲:zHY K1B{, Mr2+I"Ys޽3ݵ®uvkwڭϻS3{u' ۓy`k@# 2e,5(N p\SAj8Xr*T\(x]HsR$dpKrpKB@:KBoSXO\>W7 5P'WCqj 'Q͖+Yc̲4bP7Ƭ YsFɊeB9Z-hY"ƶv6u^8e SUStm $Q_߾SI4ŇC,HpT)A9`L$ J4\{;s`8BƒИE1qfn'H 6U1fcx ޳4$ $H1']`;%2v,L zF?T oɗg ZVL+&V({LʼnNׇb<E#¿ P)ƧيmTWo~>%ΝR`3Za\ ;D%&X]mT,>Yh<&ű ݞS(k1<q||xwfb+".beq5EČVAEP{t3*w \=oC';\ vK)K gB#Cʼn#lpG `=m.nEhMҍ=ظ"p8o_-^'HI,ā͊8qvB`¡O$aZwH\2KyiaXil ]p`\Kȇ86H0`&'#TpTNU_T)w tHc)2dʾ /C|_, ->S f(FA&0@*Vt`U_Ti|T}y`n~-Fi֢D CAE!$ YPҍRbn`:<;ޙj\,z, Yֳ(q}H2h$xʡ"rErAܨ5DyŁwX)bDbp0!CEX A8=s!k80ĵ`..qf1]9h(Z*/ A_CPvݤ.Q'8ip`E48:7=r~d;,Ya)݃>+},HNGY,*YqXg7N]qĹ;4DXk ȀX:LkJlLtMpːfHE4٩{+Ÿ]? ޵< OGq5:^F/~/|?ΰz |.<,~[!h~iBvHz;m?6m{ջ70Çw>;zn;T=t^MBWwM&tdz=^EwJBׇ紗V؎}޷[J 40x>.nIRv߰?0Wi mN۽:v8 Cw&ĝFEfj$ &dlZ&;Z d(X u_5Hj}|BSi4>%mhҠ9MLrWfRcCf܉i`l9:]M<*Q[tSCj;l<m齿?n4p=dH:JS}Ov^b7 aXD%V Oo :!K5!Ս"zm@/c܅۔﯇~t6n|/{?R^M1;Wz=#{o>c!x[8!t#7- ǭTTO+:?-2ZKߖF#"=d ƃg3;-r2\nsic5_I[e)qs9W.*ЇƲ<;8/¹꺑"bX5|bD; .r4boW!g*w.) ǥ͹&oM2 j1qu3.0sU]υ/ =kw.'\%8(G\66rrޙB#QqH[ٷ}SX弙OǕKY PDq5 ^=xߛ[^$A Imӿt` r!!AD})VŞ|Znw-oS31ywۻQ44S6뻤DT8^W=6sJ 6d+p;0#bi5bbOKBZpDIRH# `MQ.yK*033A$;'XH')@+-uR/$V:o( [\Hʉ+P-fU]RȜ<&ؘ8A TkB1,bcМ{+PH# v{&_c6},(T1ֻSMLOڔuT a!9ש7/Lli5I;s}ެR.+pCܘv#l 0V&8AI[$mTHz*gc_9A08HZPH 5܁Q8?uBR+A}d Dzf_ jj@mi5"6A?m#jI)9^`6! 
~(eZ(pJ*:}Q4 rߏ`FBr 6Xr7}6 ~ܖN6.yWIT(\}q3$܀2̐U2HDRRX,㔑P& mITPGU_;Czr̕4- !U ՘da`aR*EW-%ei:Z*#[2cUx٦y,f)٣*FA$9e5H*֙h,+<| `.sf@҂Qv M(1C/c!`N4MAJDƒ 2(?ÚܶL:e70;8sj{Z 5_ z(&toq&jM^E\aۿz ؿyЀҦK7O?h^nVpڴC3#_{>+x^m.{.+-Po`E_`p4?˃v7YBӽ3r g!%3u쑅̈́`REߓλsK#Q%]K#I FpMMi(l!0e-0QE7T,_p<{1g'턘OzLW͵|fsLk/ƦWpӫ3CtGyo./0~#3uz7iSa SrEj`JK;߲DyE~5~ꚴ4&s^HEz!573)e<&NCcTUm!lx3a4sWX#y4hʟ}x|ؖ(Ӕ#tj׃ X4a  *4{F4ԗQ)mrX)50Xcـ4J֌jkxPXŗ@A\UA * ʬ'.50-\)#S3+b Il6k\ :/rO%UiVkRc*Na4 VmwMB"HZ#tiIu\p4EHcbC&i!'\)L2yɸ!*5%aX{UN ={9Q7&ٷ&m78^,(ȫYsvPfLYw>??>J:xŠPӻzq (vd3)zOֻ^X(HG7/km+IЗ PÈ lolg_iqFT_K䥭1˾էOQ43wW?eF%&u?0NOHa9,J,0C0^`^=y7Y%)pc}G8/gGU=nDjQ*t<2 =J*^$l҂QpE>F'h8e`2|JxDj7{6R  k.YA#c$H<#`3[ot fd1 5o쵺@^z j1T[NB.T ,X-2JCM[҃_(@{PHMBQw.82KB$ ^+J``xD`hUu&JY崁 h6pV~޳|-Ⱁ,UgJϱh)ea)RAk(A|)R>>XEi;E4y5w\gcsx׺ì]ޫ+ {\Aӹ)xQDmqTb&аD)Dxa2`u'(.,6F!&[n*-d׍ Ea5G$4[@#FT!F `DKt\w&Hz .9Ls.氖(L0_I c=ˆڧ4!BH?~Xux":!B6crÒ+"hTVs**`KaG;"xWʭۨR-Gm iO " sUj-sOMsU:q{|Ҵ*xg\ xOSBH1d9rVB$;uWN|me@$"uvЙnw_FWdڙEW kIpdՆ1",罏͉|P*If/} }M5&ZvZRjĭS{&uu&*xuƌqbGpЈ$|D)8³࢒XIPªp&NicQ*p*[*YYimh S-wFS4A[mc04*"8Fm`*RɬȥD20HT1\3x7ZatDrr;eWs-Rm*CKՒqB܍3mXX4:T"j6ǭ;VPz:P{eTp9lHo LedXKx \;(~Wm#+Z!%CL9C99ϥZfrеךqa xBxU_jFlH{vuP<ڻ푡gK7blyۑn!]Ә+~E+_^HENgy9Lb&uh3/tn:=ުu:Q@lF`q:Re2D? 'HV.S=\/ΒSX-f_~xZAJFgo9Bde Ӈe՜,KڗR2vm!C.LS`>BΩfpBE,73/ga6iS>d ?\[pT s7En6"yn7>`xy"٠P+acJwr;ـcYYWr>jo97 =mJ5g]){#S0w!΢1 PG(H7Q^϶Ӯʞ}I_u RݶxČ! aUHI0BIkòO>?)]J4i3w $bŝ kjݧ.o\:VWn_}ͩV_`ʐuz*B]!W~@30ʮFKT Hݞv{b%Y7v fyz;% dGg~&0/IGy*[ J UԈJbhncnJP$G$gEքp)gXX,)M$8P\9O%|i*FzB䵎q-V>qmADFXa"grBAiR:HN5 0@K5׎u TPUS#ƨ-5,s% V-jV)gv}5@wme.o̭ d:o4&qkW\}n1̖O^ބ5{Ҙ׏l6|K)80 *-h$ٰF'w!͉fPae>4NXs%^WD ؔ(z!muhw`luT+RqSk G) Ԓq;M;J ~7ZٞYH,j܋c{;s\s4H>4.3;s_tyyY(Z*Z>-O[PFR~/6Dy?ΦZ7g}}Ʃc'.JJDKʤ5E՗3"p*~SDi ed-F*[[_sp?Q(#Cs3CZa;0EY / }=1u=fzck$_*׮Iir 5EŰL;zB/;XUn*[[^MXG~kx%Ʀڰ.ebrn)H{)rcF\c7y>.Yr9#J2!p!9p)E8X1q[J;K/./k_2DW KLdlN,f3v[=njxM9̔DhW)aO;PM?  _OUAӏ:r}Tѐ\!$^.m-ÒZ _&n.7%\NPe,_mA~at;=4Tp"[5V 2m_KJh׽Amb!-=I](qN Qǝ6L25o4F`CS.bRxNJ貮._6&RC߻r_;eHHf1M>jr0&(IqocFag2>w^)3byK,M.}9%1苙e!H#oѨX h O?mC8?~^d/}Llߗ-gy76 }0G" ^21E1(a&/pjL]soGS0}ގ>UGiP ͯB_&7~~L V9S"xrC80fDR~ȬH7͔\E$JQ٩rS-9)<%c$\*! v32,l N)Ҋ(bl"RS+kA?+=0+x>e_D8f1wgLg%??\_rqqgO;f6v2?=I}k`X{tsCm>k$ Fp(- 록:h[k;[|9|E8yh'M,~dART`Ufc_'ww>\4@rݢbm|́쥂A{$);ym6!cOR%[ΑFPᙧLQD,[RW ; 8fEc`/m`ٖ{'RS7(EM@Z|YmqTnTi"FGah 8XOLTIŽ/ H!* Z0 lp0au 029zDE &;lѽTw~քzs7ۛrْGܿɱVWG9*=ϣFre||˓!~Àֈ{2ۍd*q/ v A.׀mo-YXxǐ! 
MmYW UcL{1V!(70<%/:L@pOxbϕ%gZ{ƙ_ aq"Jx}owS}q- b:vg(9|-  LwX>?e8\c'/C OaвGa*hdQUPۍ) c,D7 gTS]SpG>~$,efk ;h|JR;}_8?݌f ZcHG.>:ɽq7y,ynvb gG!Xˣ?*Rȃt(M>m‘fNWF.@ř^:WPޝ T<*k}Vm):c Èu{_6f} H}VJ g]K`M1(#3J:S{k[~)u-Ul*Z0WT(aj MC+At7~?Vj߼kE;S*k] cڕڳ_뒜OYm޵unW@nssz2wx¶Z᝗,%d u'smΝ%ЩͦM„P*`'K*2H$H2+kgqVɍ1 fX ¹A..Q{~B"amK$i@i\7Jvi`Y cz9 |ӻBGPwT*A">!뱜ꩅ9 ha:r8`3k\ f zcGСF*u$l(12* $C'?M>Eyޓ-jTy.ڦԾ|KPjn,\5L{eI]V XrluZAoVaX:s|9ŢFa٪WB{&%(y"i pV1VfKh9MFCf&K~z0\X}=zN9-(D7kJQ0D8r1)f KcfHm] pSڈ_8&1˘Y-J$"{lh]0ޑ&1!&qBTK73"3)( }DV 4KbiJRuV*UK%}2~%5 Qbb r7n#`T'w7ɪ¸c㑓2\Sf #K¤R"gH)eN`*+ q7TLϖC': :`X}`)emUF 3j#\IȈ)m#SayĂg+)W螼^NeWe'z6TΈjLֈ-!U_I۪#KA ~ HQ\X̍_Fיx y;rc]C}B}(?&F/^;G$nj?ͧ]i:B=>0kw?ӿnͧ,vs /7GS}Sd}yio'?iTS 9e4f3>f=9y1c0ξ[psBYe?{::?/b:pnzzgӻ&'˳ n[/|r|xW?O~gY!g'xP=EO^Wqۋ-|c=&=f={qܓ'd xO^ -|=Tĺ`?h^yW?}qٜP~}߼~翼qO7;8:| }˫WO~W^ns,O?g^M_:0م=yu~.~^9xIOLYg\]8;rQ{O: Ϡ7|q ʁOwnAG^£@D5nnџ}e{Ty p6]C­dAcws=9\Yн~_m {0?^3X=tHxŢr.z1̇K/ 7 @{'bpoI&A~%mO=U`16 ZO0BΞA/|ɯ;I\k8<~{yoOoa~ _F3/~\L oy>f}>h4ݎHڌIi"8ڥ9O'Ѥxf>8W̿?Tf`יw9c|׹xY0wQTPɢO~IX>/UHPbQ$Il#&p)d5)[[lx&-d|\M~7xKn;g4q2'ڜL?c6F'oO3,{>ޞ ׳ea=]笼a7#%e\Fa^#"(B&B:b$iRbpASC,3ʸTkGRIL1;j¸,Q(%gQh;6€hevJC`{HJP)n vGF֭ddDR΅hs4oL81۷llIc\\ڏ@!(8^C␏(&:& F]4u+w&A(bKa3(8lK!6{Z-H5 Bص'Ӳ-nmQ U8"NMOףnv05mC@CW%ƆB- m.[BewVFm^w.wU[D#|[! P\ہ>GDBQXJkaxZV 1)0?Ʃ, @1N;C'bS q QAaףJA+1^ |Ỉ#AwSݠbX]RIbu Ӑkܡh@M!Zu@bF#=rg%. llveXv{c3N-N,ЇG%O3^[Q:ƭhxTkz Pp+H6SXH1O^z %dGkp(s˗o/IepL_޺u帙bJpT Ӭmڮuk<|P?Z8F$UUc: <( {dǓ$8r8)~iҾr'`R>0kO;"1Ҍk]fG%S{ƽ#N%8D0ײK%=5UK+8vT@ _'XӘV ku,ı$&HSU:sz (*`W*󳎑 a~D q&A k&aY qRp EDtujq% dKba4u4.̍B+fY*Mb38 6x*21GQLNjkҘ`ٓA(qf}@TQyκ y=} ~IYsHk nVo43F*YDn>"K2RF?3r;ңrãqfX M?q]rf)I){6)g]Yt˻쏿C47Q+z2SaN!-jY}cd{-f\'nyq7N;I x 4R&[qNu Gu(| 9 Aghs:DRЉIr AGhqk)JUga3 D<ԯ(p0sh"2-bFhֆ L5_6h-@L0#cD‘QtIxI+$'<2>|v`rR:CnqEy0OǨ4ڈ*(F%.4ů0qhye)>õZYt@(Wt!0}*,_ǬRb&"8oażOӋ̈ieqhtp0+ 'r5,_tcq|B??6fâӽp?g۝la$Z(NPifZU؁Hp@Ӓ zy'PZR:2ae^< b.Un`rnrdu &hꑯSmG>Ȱ牧0&M+JmBޥR%8&C`Cz]N5{TGƙg[*e|F($a70„Qu:9p4TN_2ʪQWKǛrDW2}|T[Xi{1g_$A%7!7Ftз;]ZD+>^5zâ>e{)PlwAbgr2L7grj%wxTxӋdN"ٮNjbxeξn3Ovh{\|ڛ)8?yuB2)2 \B ))tEK[h;c9M| Ԧ(OvclL8}8dp5UhV`T*Y0 Iv+Q c⯪BʈyUeuPFQGiuTdh)6HLn(vT[^Ђ݌C5 ~"LQ킣Q̴"'ECY9s#h" vK1+;j$aJdycq0I3]F-ZKnnG[C0zM焖6QKY\5وI@ oo Np8p$zi>oq(˚p mA.5W8SV-S}$Jia,~+rSAU[f}}\Y ӥJN敕wy@u H7/W޻\68A< ^n{#̊2/VȳRU;LPC4RZH8Ge;&)aV+8#1x=1 nn0FۻjfCԵͺuV?80iZVv`;^Zg\S$5DG s.y?Lg: B FPH8U P@Ch1;@3":  >]S Aw( ­=h0bIEϟɃo&xym ;&rVPJrm'k]ǃ%ǣN|+PN` cޢ5[T'L߶xيD)x?ޭ_:XCyHnp +;wL ;K^AB=ܣUÖW0J=ُqz6hޖb-L8PMp{x|"?LJgZ:Ve}F Jrc).;NH< D-e_'$d_=#QGfUG70vy) *w\=R7։-nEbBQ8=nM{&Kf $M KE A\*\rp,."Uܶb*Amq5U[sJ*_"x5U}tbLWe*ö)^)dUrB&!QZ~ZIOOIc OI6|]،*| 9 K$\цwoR1HIwR=} (zO.lFHzj:3s51NoJmT#h^_Y6*@pRsUj и.o};cSzq&v,?7R$̇2^'pzUwpO1@uqN6m &Ml{KMϹi-p^7CT:D-jIø$3 f>V}OkOOJ %ӓC !ǶjK,yG)QB㩧|m{(^W+*C=9$ы]Yù:81) (]-5mϔWM_0e37nCEu{g\GyjIP]n"4H+]IN 4Wd֕Kw7bhF,>C5ܤ)b"Ѫ>BphE`J%%hח;ū/8Upxw+XEЂWa×~q~yozJoJ+ &\+ Fh8e_,D{x)ǝ˷(qi DX/8L(DIM )jI̻= J|Eo~1@:vO{i0LlnkCA8 O`HEP-5PT:heY7Ny"A˽6ؠ AJ#5ɺ@`n%kp0㣐`pǍK=,OGÃ`}9 8%}LD x{6ft+W=JA`ϛA@x~UvڃjѰ0JFUqZ߶&ԖꤜbzzXb~ĈQlp I1QJ& fIwkHD\gwJ:lZi&-Ŀ$톃o uy+i0cmRi.JB{N7 s UY\9Ab>DiE# =Cr{F;&wyud+(!}C(ڻ=uM¡҇=_?-ɉ(nvhx>=-Iu^Mbz![қ-R,ǧ.N1nM\d>%OyISyIf?0PpS{`˝&81a4ά)N1_ YOy<g%}G3vë_l=L RЗt:^`z/ykFl1<*sU2:QsG=ǬʭU0r˨|<<'02H I E#U*&ajPsJG]0 ?΄e `Ύ;,GRЉ"K;dH1[^H@DS<[XEFA Gv抂+R, ͡i6rN'x۴ 8:^z&@ZRB);?fhٚkGkVj:M9[Qs=6i)xߦ~҄+(tW%o TtDP"Jw=GCr~oe]?c;\L{=~ Rep Ħ]5!ZW!/ß1 ef޳q$W}9Guuañf!L?֮^dU4"fp,9S]έPYwnBF%bi«iPb)/n0vj}ʩf޾A9w}kN}㈉fsTC6UAb %נ3o=pySBET@DT.ժ0Vgyw,}N~Y$ A`YxG[6u(pjd`DCˊ9/e /;s%9 )[T0E-ɽQ@"' W(=9)&4jl<_'MDEz1y,lyQIۡn](bcOely6 d~{ڎxc;KʕnN/_u `DGw翬@W'~IZWp,i?=,8]< e^ޜF\qLOO?ۻUO&]~碸Ӫ 9;+l} )n=<%g.d١>"RVu#g}}aJh+ki}⩣L!q㬜ZtNE= ޔB <)i喧ݜ|xr͛I-eՄ=IhRz'ˇ.֯oJ' 
9?w7|~dRƗajSl#a(o`j aFb0vkyRŒx^Tڶ|Ȏ:KQg);,eGU aLt_IrH:ɕm]4\z_0}G{QQ?^*ՌIK*3ej/THJtpM`3A[[d#@VElaE$?ƒH1Oƃ/E N B=X \ .DyIc-E`|GG0MbM:"KƢS%p%t2b+nSss夷 jN*^'u9> ?x3ړ M^m䞫-yYBqRh$Kq7M%l\u|#iGN-TMRј9ySycC4XiZ0Hs`KQdjíq4ZLq~!؉5{7#$c( 5dt͓ =6ʣd ^tcay4A_2#ٹٲ0qlӼX `i#lѻ9aХ'6:`RBQL i-=)aR0dڔ"Y>2 0}˩u&, yK@Z>$'m(;ᜬs+"]ݒ0X(I JOU/KAjm!'wZaEcc2'?-$B[^:A΃".6"ΕՆLSta0KX%2xIdq %Yb4w0OcLGK]](qͧsA4LE%zg3t|X]zUL[%xO?DՆF-?gZ[pߓ|V\6$4ھI؅lcY(N)Nκ-V#Ni}#\*nw5ɃX0!.(\ӔSwsw_W?BPQrC?䠨N|j|SO7^J Vf%4,q8+1+1oy+PJj9*oN9m`})EcDlI@6nhjU#v5%R jf+bUW8\1)C0 ^JPG1X ӻPod rBlΛ^5£wg!A6O^*}?no׮[KcfQYyGFh3zwm6'űF@r&X;qцVAJ˱WlM7N`kow)_ [/{{CQVͧU4؏Հfp䍞i>foӃ# ?5BDj^ý8r 6e-q fBdG'f=]Em&E1%+uX]\, f1L %k1Ǡ>c\>ir|C ZUm`&vtð۠ܰ;0~\n%:bdMOhQͫh* /r 5"!J8dGloasu^Wg-3W àh\=cx_ ~ۢ]\<5yfP3!M^Q"T2rR u1MYXovF#ZlN[/QǛ`&wfs=iD _L·5$&zw-6 ~%Y~$K{*׻(G 2¾ 9(!GMiX~YvHa'Mw8_֭[Mp)єFO偅2CuD$X?A!aQgVȀV!YWBEkr+&G^7D]Wj΀-]\L`!V ^7ӭ.ȍ T2-Ri8bm׍d7L q#!%yaR 1"ӳ4#9q ` 7٦(ɲӓ{_>~|\vݗ:uoir<T/w.~z5,}~}5>ФKE~˜#LbO6zȹw[=v&?$޾}󆳙zF{/7ݗ7ןϯLg?bro&Y4t\L汼%Mm\@}bHL={}\` >%7gL? #6 RsvY\|<|>_"1E뎸t~CGeX%K\Ȗwm\MZQڸ+XJ S5J|_avRm u Ր yj8G@[!TD[ - u mTj; WIZ6o?rtC5NᴪE$H"Y%FVTPC'"-6?]qκX]+jTOJ{a+yzebHnxw{o HyszWN~_~+~ Xuox`0[5Ci2^9c`5ߒ$%L5>C(*X.WtD1wyS)/ʑҜB-V'<@ JƌeEuJV@${eN`kδ"ĘnqP+(u}' Yƥ6CB& V'ޗc&@8cF!RgBgh5(>̋SH ZT^+:L3R_/Yd `f!eI%"\?{Wƍ K_vIWÖݺdJ6/0YҒ_cHCr(bއLiL~ZM$n3 S(/tDqeAj应^{Kw;*蔄Ҩkd C)oJYUGOo}Sq/hBWPTT5*m 5U8@˄h5[K!gv(rGp33 N< 1BJ_?xBiMMscj| P48Y1ty 0ITxd‰¹, GBBp<[GWpBi/.Y-b2 > B .AzDykEu )S%|\hR9\ФiEyqlQFs-4TLpW\D\#o,nߩxe`aS}8a޵X.FK *iudXf*p^*|hR:гhN3;&Fn*5X"C*T8];ʴO.316Gp5>)8K=!z2G!#Y7d&O/k|z3,.o&E2dGr rۼYָ Xc9o:5(m8T5C{AZӼP0Ӊ,GRͧ<}t)َT/v|oj&VdPEKn_& R[-NdhQ)wO@*(L(,BnO{I c)QR(iK=KTQ(y6K/M Ƭc1n8,mV;{:Cx~O\oR3$`PO:S"MIFR|w, }X1*YIO'6C*n>ykHSsmi&Xab$ߺ+_k4|̼y7W'5wC8u`3Be7"H=K;8J=An|b$hp9B3$DOzfٞEd!g (pRh8ì,dNH4T:-PpuF$L4B+ZJ09L{jA @6I=41Sq 4"̀l$ '(P~VhQx}J3}rH%)mMаf"pWyg"= 924iřx\bI4 JjSiɼĿhcH mnh"3$ex{gQ H-0~ hibX[}*eu72fyqɸ4R3˯5xbyAˣFZ}u:=1,H{嵜)f @H,|P;OͨB2L%fyQ@Zߟc= D`@X"-]̔ZNwb" Zf P5bi!E;r³mOcmOU%QgcU9ݰ6QU3y6vZDV=GPRXuXH.^tK^H)- T{={6A󗡥YFmf*T @n2TZTP nQ]LwmPㆥmP%4uwёj0<{XҵjkT6ߡ D{jR*uNP{xY"/QaiPO1oQJI]h<~"x&̍ݲRRn!՗Zb׏t]VcY=% '+zmL x2 cj\h*5&$5F4.=kڸh8 05e`KFuf=1R²ڊu5.uqO]W+kiFrъzGշ;UKTC4 E)]6krѾujnM1:MQź 0tZwmݚDZ6f gZ7%Gaݚb3u>u.tZ8ں5Onm̐>E)Wtxxt{JITiE k>2;CAwZ绩 S8<ﲼ'0QGh47Hr篾Jw2*3".RҰͶcz[&JkfCy2 ozv~dӢa<%Ц:i-CjkKr RWXrtGDTH B ꢗ@1]F:1j`X(-p(zbi|0XG2*2VR'sV -vfl->:TjJ;ٍ} K|C ^~0]2k[..w&~!@R ;]HX^Q_Jʼn>z)UKFHjCcИOjC.oR{+qV+grI91"͘J$%D*cV?۟b}矋ȟWKB?oI2Mmؔ44 `seKS#8ѣ1z4VV)Fnm&LU8%D |O G5zHWպqW3?<Я|HTe˙(d6E]<2 $=rfGsf]bϙ εRݟ"/DשQVbhJ,1T+nȩ R΢}_VEV#_$*;u 50/~ռ#t_cQtH#}Fx9-쬈%YϼnU,:it.|k!&Rjrj. 
W?ޗo_7D0ZFDCۨ&@6+3 ʖa} HBL%mCrCimky81kDjfϾ2I\J-(yjц$+2;EK*)>6pG"Җe2 嶖fOttM'X⇁-" zXaRE:M8ZЂsIMQ.i>Lq6V0;vg*Ȭ%-37.j A~,[\qtȦ:f4A3tpەCWp7bPnV sx2Jg#pU*h}G3O9Bp]`̴X8cNL'*@wW;Y aW&$d` :H-`m$(Qدt}2h@V#:;X@fY7ИuI(򷍆Z'5ǘI 5*Kl T[`lS"ĉ&D "&*h10;jQAΑ߯_e V$?OaRn.dqy3O/7 c<{.C:jj*/iɘRЉNY% m 2+γGPYjL#@p"' 8bꢦ"B,}&NᤓV{'Mʕ˼ xcy8 R2 $sYǕsVS '}W$,|4tXâ,⇯} EC߿zEe~8¯>GJ>5FElX6*f~ τ7gn/:3w냀%HcPM|#BBbkF ǒ2-8E`ӆZ(~V/HbC4raIj9UNீI ӜpuD1K# #=+2Ftd&IgK(%S ZS>Dk3Вh3DB+0 Jn9ű/ˏalt}{_~7wy]d4_l\ :<,au_-oD_}Fxї>Y]2nLV~{Er7O_y и)ӻl޹W @F|x͢߆)su@qN>sz^Ϧ9NRJ>ŝ 3aˇպGju_utD% :Qa>7(tlIv)+:N`yϤcUQj(tUܗ`t/s1TI_BC2.$q13i( ]p:toL a5E`Aa63h5Ln-):uĨ3Cal"9`f42lC : 1M*ۏ-Ug1y(|m9!ɎSlGUpڏUEn¢m`Ρ`O!TO+Ln[4N?( (1:y v#FivuAs]>NN3HOTM\9iHp!k2W烙l*i6dl0uz4SIWq%)ϜHqV\ ]YN}\m:ֈ.Ezqާ ~hdieߥ2B97lTғPr 4IY,ؒ3\m{)70\š|b ܗDā:cWs!5Egg; ֒吃rpE~yD!+JY0NvC?tlV?zņ9!R!S| *>BkTa4z:om+]#Fk DBDV Q uw oANՅF 1:tX$1AD=uE 1ÙǮ>ǰ-m}[B`:# jm)yN)2V0=gCХt zyh{?=ҒJ215h43tԋ Zpg巡b(ʼn{RI:ƞ583Np4Bع*/ Ɣ 'm$(;Cm@2T7m@ӋOj lhgLo}0BtJD/JΩtz\N[B iih,'T-I gy[?-e n:Ab=3F-fNqsb 'RQ=[\(Uզ3u5=E'@ 7[|ewp\S,~n? ,+x(\b0Z&-7rv%3F'@ݰf'kfρ5Z&S?; Nyc޴/'!}0f}g[?}1_n{VE~L(gSŞ ׄAeUrO ڹ8M z66?!%]'?bZwA4_/(9O0Ǖp~}9 @0QC oe$Q=+"ƚP{Y *_YM:(q D]z;+Ri3b`.,v?8|xLF*OI=M|X8DŽ3IoavinqZnq=\CWF.(%K+<jh!ѿ|X }L:D!Oi@PҨDǣs;F+T@arbS,Mu24֓?Yh95Đ+|x_byq8b|x!M-慨 S5z8ҀfšԺ=HѤTUMH3 >c2%]KP+ +)o@ "R3.՘Sc W.ŠK+Q[:_h!pkXU1aƒtRrkכԧPG:~GCi_X s[?da[kw0uO`ο U׻QzI al08C^[/D{9ٹN?qgɟߌkucw7MN]II0$A !HuN#H) &HJPf QZ \ujBSrs.pDʂHK P&PPe\Aj1:*)q9"ux&5+N$e^; (=]z4z;jMkɏ@(^*.:Y"F &N8qiNspZND:,K ]R('D"5zqb{,o&ew5Xj:Gqz &Y+s0qoG##fJ|']b'a.>$*իSQvDIo]LN!nGK&Ez /P@zxNڠ\>c(W A(ٸ>1ƙ Y ] /DbBJ\%\}Y8_nya[\K%|M /1c c@&*&k-ya V:Y&TAE'69FT9 &'<«QxPI'RJ+yy#Y٠]UcV5-d=T {qs,GM^14mhcк %8LԜ")cI FS QyIG4X &+<j<Ƥ."XECJKt U:gjT+{H.*{740t#-жSY~J8nT9;~)Bͫ*.6oᛷ?m؀ܦCuLYYUBQL}S`@kSJe)h=:$1@̤Hj1bJ:g)TV 6kLs%|3 }b4WC+Ĕ߄CjIzawHD· ,ݣMfCu`($NdWRRH*֡:i):>43x#dȢ>B&b9N77q-Qhn6a%s?0]OʷP Ʃ$5vFj演0!ʀ&a?a{o NcXDeUİ?aJ @n@"kZf.3_OƸJ lv2nY͒ػT'@%&\b\&5ɺYipAq@DNx&+iJ=mR+f]LQzZtV1"BnН@U֝ t1h 7bADKcIHx7 WW)h퓟c.i0@uI6).LQR<@)8j EE+Zχ~fk)^h;5n{D3`8MU=΀D%u0sI_K`E+`b7gB0η40tZ +#Q7C@hfdu A,t#);f?Ns |,*K"nZbjƴb1kǁA#Am FX"PDncx Eލ"f`1/[ dhlTIn@ϭSIЍTc0$;0DacM,KQ6ʙQ+E 5G0D{O۶$g7*@/Ui[f\w>rQ1fc{pw0_G5j."ҠuHo!bmoP-mnF+囋Gc t>GdǂGx wh{׳ޟ]}Lߪx?F_neϮ6R=J4̵XH5a.-o v KW`[2f2[T:fA}s_Q nhxK$:f»ռ{fL|km=vfV2&w6^Q5 ; {GUa,GA vAϻ[$zѱ=%,iZZBm*]Zpetь궢teNnM4!/oQKg?ײknrCM?.#>(-sgNFw16k|*g?u;g2>^䲛7α@ɭA,oQJ K-dC~٧0ϊ2$O.#aZfsԥDepOř(-m#fT ۾I\B)@D''jf5j&;.W |JWT;õm ƈؖr_(8Y0jVhcc//a |,QsA"t<°Ð3K ńr{I˓.!-oyr.* I?rҘ)HDZR4|,~o~#75/uҶpKrq/}pRr9ʪw[1ؘܘ`y&_(G%(6}xZ.hGihK3^~6 mcD   e8=o9%rGvyGTmN»S!Lhh{6(SdHu't1AL=pC3Q#m횙(gؘؠ?N{jPL8R8p(0j,'D)J;]IO,R4/ݗK,]?΢ @u S#)ĴCbL;LA3Cmv]q X's{0۾#G9 .#wٞ}aUl۽v'n@h5>IX7_]W%=OO2\聃=#AՄTj,tGJ_O@MCuuq%1tyM\L*)m%fshd -<=mnGxUckf&rq<{U;>{sy}SxaZʽ 1+~;ZzkY:PXwGQ|IjA~)MYhЎ$JxEѝc.4Ã-*zD3l*ugvkf\ RL'mfN>=٭ MMi-PQz]JϜ0%3dذjy~jSPsD6x "Eᡔ48C)Y/`$σT-?KiMC5"KQ`s3߅#;Q-G5oݽ@u!{eZ9Tig?mɤ_yY ѥ9+_ ` 2wk<  +k$Ls Uv\]=p *]=lklnR㩮nRisBz&5&FT=)6ltldlr;^' ]Q;L4'٘ :tSlo :FS.PLH>p'X|IX1L#Ij-€`qT1aʲn(J)M8k TTm&9;QZoMKa)6gID8/'Y-(HU*7I{:vJZ!k #'b4h T8i'~Rٺ<-(r R` 0H`E!,L^Yp)lZV|J,5QU#3ԵbL\$Nn/\F>st.֟wo3[xW [|O|xާp 1#0Bo^ x~X|?W)xo^O!ֿse߭&ܬP:\#[TEqOoLsTKFY؏'=7Uufu4̚R JF2>#xԪ ,+76ZAX7JOhܮɮXt| 9D_7%"ƃlRc>8 Lvzj]deAfVVJ.%a3䔹ŹXgB@0e Gj)l.CQoT>8yII:8X2u}#"=(I! 
8u%Y1q O\ PLT @152b2kӉ K0ZWhd5H,/&"Lqu[ɉP5>oCUP@ꣳ>SO8\E8n`T3WK,S&fߞo?V?ѭ.<-ONnWeqzE R'.]3xA}j)wad&JK)x'OH-85zycY|CJKUL^7m'Լ&gdTB_g[2c,TI}TI}}RY/ofNY%/Jဣh )% KP^k(KPPX["ZgyW|y}6T3|z}Bo(E*[gog\쇧M9uq#~r=֭}!/sw?zkey_=)r{zǭ䴶kE~__߾˫_lz3'㪪VmC}稸jѝxB#5 Mױ,bx$w7)5TLzEÒ.N%`\c RN5ߒ 8\1Nm)dz)ӵS%ڑOaDKuTDyϠp)Q+z/( S (;AZbYFOc~P lls7^$1e%.*J.H)|/5wч>/(2ֽZ)$aZHh"3ح6?[9Rk& I::-F1aU8#2qx3mQIpbBpoqU!T=Dv>,+76Bs=݉$mCޕaBʉQrqV:scУ6s^sQoLg,>b2BV YnERt0Gh!- DȞB&ɘG~kq)JQL(qس V1Pj^{lǫ>\emVxI7eUO}*a iw-o7H.@F{ Snw8n!֭,4#gFGb7{Zda[mvSbz |Rl lcRՒKV;v4ݮ>K!tV5x!VmL1 ^Y9i;*8 1Q6yf(9ҳSrPB 0`4$x܊J435r䠔4TФhƀ1Y1B\zJp{'JLY0zE\-${-W :04@Zgj1Xo72 Bz{YH~77!$ TQo_0NPtQ)bJeD*pDMiׁOC, \P e2}L,OO%񶶑U~ӧVKe#?-hGWFKn~Tej u"aȂԤP--=S A$e0eEϑpԼKѽlv<%# ХA \SH\"I9ƅ+edduj w8I%g(GwYtKALŔ-ESh 9sA2گccJ@՝#Ap>E3AϢ:fKE\J#Ĺ/G{2ҚB~gIY(tmEsqb _]YgM[κ'ؑov.}}0'ч*âZ:-A-DNK%&m[|mڍRur1Q[ Ңi#nѨ mڧtɁ:V||_@ml榎Rs 5=BnW"^*ސA%)iEq9_(?'/1Łwܟ8A/{|UZK` m0.-؈1mAF5$',2wF`Οʖ6JV ԕ5{sf{r|P-8i%zj8ͅբEoN\Er{S!0 8\K&T1 ^| |$ b/ l/^\,xl_pMƂ[цIĕ0-VrH<צ#9^uMv 5+;^b$򬗌)fEsВ.Ԓ>JњJI ZxU"j^u |l Qր0*$^>*s@\*ʠ2rU{z}$|G޸ Y\=ۉK.IZ 7hܮA8ҼX}̃4_ot}ǝ$;yo2`> {o5#Ї?n[U3 l'>~NNp @MʦyPɻi9xZ JL]ۄ:ݳDEZc{zrB6q)AfEݔllߎj1(1wtnǻv:m乌T MƦT#o=>VAԾw; >δFл a!oD{cR (tL[XiB EXnDiƬ+vBGmIoGr;'P3 0޷-3l( l=Hv2LGT߃ :@J -j Y йv=ܖ>;vZpeXT`\˝KT!_|о\˾Kr kO.)h2ns0nhyt(EAtR6]gΪ'eCuVQ9E. ^iVgJJf qUICܑ\Lֵ瞪Gs)8U("<^ YvO-bى0{m.|:w7'׷7_ֿXm<'3oRožm_}Om]SZe6!Vwy1*o;5è>^11aD M;N б#ؼ_ ~WPȜ!ɬ &^=9IK!Lkty1Z%9aa ZD+dZ k%x)-Zͨ jt,5d6/E8gzLADTlRDjU(ʭ$NvofK:%:A]q!Ǯg>:~= Q`~ryM._*odz3JNSrJ45|[(gښܶ_Q.Ǹ_\-Ikd_A`F[J;V(#Q(Q5Gne`~[E>v4tظ?XfjXfﻝ07 X)(9+dhQHJIEn2@!KkI1%ᢐ[ZZmbϒ9~%I!6DaW$Gwg,#M3r f"WL!e3i#Hrw `NC6jv͢)ZZ ַVc`2[n Ђ1;U`RPhk SQ[@ `F׹f@:BgeoNt^8cBX!Kp% 8qh%2y"9)6_'z9vSz:y%#x ,|6]M;Yqv<1w+AV۹[ {Nt" }(=qEIVz :kyt;n_nG]õ$Ŝ®T6GtȶLj~JבʬN^Lk[P&鎇6g0 K;HHؚ[EƳ*:k##@z,TT\3|/RϹ\Qh6HH·Z!ydd:jBP\x!qK7wX_ ,esgu-Pȥ`;N3 ] ϙR9]Dž .ߔm^(~s{jbnf>Y,o?Ѹ4o'c#OB~+cV GN[2k$Mϩii$hs4V~ Rx,T ϚA.xYM:U(rm\X3|>Exoa?"٤b7EqyLP(vd*2s,Hq0R>?Mǃi$ٚЫWWT + (]~:)?}_$-ؕzLOLdm[Mkr>h_F#pw5vōy;WfPI& d 3%v-fU݃v0<8yDeƕ $\0gCΔG3JŜ} gN8(~Vog#Qse}Z*rkΉXp;|Xࣖ-ಡڬ ;{٪L!Dfb=BT) =kBW1{F@gnVGڔHyb64幏l/(2w2ue(:>E!qxVUΖR[x^`k&XPƕQBDݟB @92 (-,,hI R]Hk҈,wh>\j9TFe@a@N5Pch$FSҬ1D0W: pLU߱l9_n3cs{C>7Or88܇x1PIw_ O&`6/oIpO߿B9Q֟h۫F;AARB1D0`oWE{%'ru?y(p|Lܽmo0hr6Gp'^ >!VԫM2 %$q-ܡI\HUtZ[%F)T RVI+\`{bJKmnb+i D! 
n @Űv!LF˭p@τ~smn;^ ,0ջjpA]UUݧ0+ x] SsNπn^c@ ;NJB [dSnH^H.\!y ζ.VQMP0{t`s%4LLX."A0 vGjOXgBj RЌ.؝݊ K&SP-`-a-T;1}M9#(yvX/"c#3mF)YnPQrv4d;ރcyRe%6Owl~Ko}ş͛9c7fMϜ '0'z欠RV65yhٝ#ΙE0HJ}K~QP\$h t}~qyreEZW:Q3h 6u x|X7hN}H+0vm@ K?ApN WH9cӂCgum9ГٵL6Z%ϸ{ p),z>jOǝXb'#coB3lʧ9E<\ EMӜH*5&RŲ^#-sJ @ ]~44H + *aP5gE D9`X@ )QءܑO2Sx"B|J^iP3*3jΊ!"Ea%0Byn$)RK-{8rz{q<Ղ` ѨIh ?MƇrz6#-bfE] Shk& !T~%+8`ZNZv4m} !27ƣFu5Br@=f|i\#c:14r`֝'ו3ڐǑx"u X,jFl[ƚ0v[՝^5 fDh="wβcEtKNCn^m]=׶H فWfia ̺ #N'ɴ(ߙo8|jA-ܯ~I{{v $*%IhUL_D#IC2!O^UdWyq_%]g$O5u{%~[·}iMCׂǥq#q<s =y8x"n7~H?SH}o ToxרSԔ>A^9ifl^/7<;ܢ~᧏f1Pv2.%N.L jNŤ$*//Ǒ  B;+kxs>,z* [;hnHP-Ȼ?ͻ5 (ʓs1 b~jdn|@;2>ޢ>ymGFC:~#7Y:?Φ4lMի=KaGR)3*@kל^?c[ 9yt"=EbRdLPAM1;¶sga_ |W.=V|/7W{l o7]|mvӇQj.%_^o\g,)/߼ 6/zSyʟ{r-7>>VXC,JԮ֙'kV.+.)2}۶vcA{jT bD'u=O[zےHքpM)xAGjT bD'uROami!R5!!/\DdXaĶvA-I}Fv|S;nQ"[E4I0?*RTv_b4M+ wՙ!:Yʋ?"yʃ3ISwX znyr/~Ohp%Hѡ^JV3?~"6WX&0{F?eN[L*`2aTQgH{{͏ 3[}7Ϙ.[ %' 둎Lvd4:ti1XǛ/QxxF!=ssNjNݞD]kN}v^1bN piBwXM>>(G)v5-I1I5XddYrb^nR̷z;2wzLgw=X ^Yxwo^}RWajWޭuj섗'ߣu) i-.%ب5psZ &ぷJGE_W_C]F3?} {d8 ~[Z}PRY),"4/ TDT) "`Wwm#%RWKuoWyU(E"Aw], {z{{o+l'=)v(bVgU@?<5\|{lҋi?yY@)~.@9&RQ!我X) ##&N0^S!H%-9Tk$&̕Dt,{O66IE/Tk;6Zc4z'EIC"*csI"<#f  F|U< 52":X`Q+g(8aka P:Mv6AqDTP#^EPRHUs HoR BLZRP.q ( \/`%'%"`'@HYp-Cʨ"H6g 05ԤߥӅb{׏`^(|`M -=,涋F\|ןqUʼn-H~|42n^< (_x=ADLV{1=s\0_, Pvuşgn|e3wK8D`m*P[OOn٦k#bY?5} jјd˥u~t+W5<)rxkfSF+&}Y~F{ yuKjZЇ]r7O޼}m>440Cg1WN\)^Uk~Qx,VO' [2eHB!NI ;dvZ^`Xjr8ū'c8 [!ۗKgif!hȖKUQN:]_?lmw RsR侮JcW&Ђ4&V_/fY#@-%ℎ7hSou^5LGs7)ytthCpeʽFo趂Tv{wZU5=nf߰1i") =tXsBVΡނ;ff+25_憮 ˞Uu!of Vkkk4AW))UԚvlj#qSDTOc!(\˫[>|֑p{7ٯ֓5>{Uq1YF R_aR.29qq<]b"/_+ի,>Hmn& w?QL΂[]jX)Pӻ8wUȒ=5ebzOo6yu@UvAv5k@`kԋ)hj9fS9ȗ}ؿ̱͢\ ^(F7+n<)Y90(cJJ`Y-+%"3:1.^35U4缷99~2,hF""R/#V`yܯ^M-o6ܤY҆ٚ3{ƥ%\j U&4=_(J ~Ly9(>dE}rP34ϿyS)l[ߢ%/[!{U/I"M.N |G=` ܼp\Ii6|FV 3fQTo&cmKZs Y!\1ͩDQT+ e2!_ܸavEmwr8k& /,*m NRo9dʨ"I #}xKiDpJ[x7[Y4nyQMV+n->.Gc%Yoz}Gnrd,?s[LѹG5U1/jNHBROA68׶?aUu,sMFhx`,|BB^fɔ&wwHAbDtrF.Xx:v˞hSօp͒JDv v< l <.m( c`dUHѵf7U)Qu^A݊+Bz15fpR =ЗdbuzgҒj۽~J%muO+KQAn?O^JaиM ,U8jJa1 vmj*= P;\KZN 4ByN05SmP֚;s$pmiOԚI0X] Ey0@T+` 77:/LL}ڲfal`.E": >}$B|}2-$]z`7n@ =fxSgt #!;Y #~>PuƒfP|IVj7g@b\cئ㤾OVY )fi-վyo˿GSIz) v)cDZQ!̐kb-.LmkV%5\ιFs$%"t|!F XG\ Ū?L)~57]"\1Lx*EU snj6E|$!f#~xbJT;fh&ŸZ =1(wXYYXl,C6 ٬_}(Rq`hdZ9){18֯>Rtxt=8tB1PDp8DP0)95Y٠F茠bsLTAE5AyRHJ@9K#D$Ž dBEqm3k9g Tq?sM^pWRQ##Y``|!ehN+T n3;.nZz[=8QڴAKZ2RInB&q7` I0̘UXS-QGAGJ(@)](A^y#h% ?WʁUneo=rVsߨ3]Mon7x_l^O jXOP07-AWIlL&}sQ1/ Pl >\]HQXNwa^(z=_;Qϗm4u;,GXdT_\PvȿRR맰r^_7E4K8GM~QsB햋A}G6 Ow$ٻFn%/xO65,;X` 2KclW,;.oQ喥bw`0c[mWbYеE}kn]H.E28QnJv+ GtJhnvE߷v+~\օ|"Z"SrWmtRf:IFmHi/?N*^ɥ'S uvjQQBPmznUC$ń3e4"Dj8k%jDP)(LrR-3P+`O8x^|{\JQ sǕ 㺐/\D dJZP7ڇ`)LfʦDR;okQڙN)P1&)#\;jW=|Ѱ/[ !#+cV:V٠  bqJI $tqPE"- uZJVy"`FC I%*6{@c!h $B \D>B)S}pP!f"شwLoo_'ڗNR–e8nyu$b'˹wyy"dgrQ?P0c꿞=7}hC [N ~s_]'r{(-&0Ǩ!77W%~zmi7xb1*^ /vm5i ϵ~llT^1gy:k݂=a6y#nxzpn'F-8 Rz<L[,S[(0JU.[uTkWࠓn (ߢkYŠ,s^>,XZ괣@'(#) 0tUFE]HG,GWYVKKt}MdpzId+%n37BC'[PfPmb;={ng j-c'6+yYhZOR۠`p rǧTK( 1Ȋj.UN/jpŧKH.E2`7-Qb#:snJB}kCLօ|"Z$S\[ Z>] Z*C'WT$ZT腱 +|e?A|Ex: %Q:ܷW< -c}/g!bD1&=@:РÇĄgb Zz>C֏ORv\k8L4?OTP %m2޹}L%KY2O^JP7շ*BD52BOSVi =MYh]OtDG+4ypPʜ߆MoSw+BsÁ` + rǷ$a>4oGzb6[Ý7$(bo>leq3~q X!gJYB s Z );UgJ~sQbOo(4J܁p. 
ތ{ľݭ`%p1Ldy HA/PpAѴxKvV` ₲ބ`Eׁ'&d cq[v Kp%7e-as#-c75)h|jk&b%C(R=ިT\0^ӺeKeO [ViXFq-D CVԳ2FhjQ(*T~Xi~Vi}ףJW UdA2*4DIAQ+1QT,;tw_O!qGE`Km*.U̓(CfJIE"2kd4kٜs  T;Ny sl97gvb!CdcŃ1#oB xZxwvacef z)ls>sWl@wvOdtQNlއcd_Q-XH0&6y|lz~pf624G T`oO_,VMc+1kA̬hmu2뫫f=C *8JX(u ~zo8SnoymQ?r^ԁKEī  =$&-(0;~\˜_|fqG?>!M)I s>(Ӥ`N9M 洮`9YSFQ(*.D4`ؠ8R(shB+K.0gDE9K'"*St1\ Xz.g{II-k# bZQ2vDV"HdJ[RYc"q\DA` iV9E!P08lHG,ݩ6F"Q E1UPc'"%{~ .ʐ``ARQB55%xV̟wzn2ݺ }ZrVIcaw`tJpa+!ZnW>O,D1&DN*eASQf8*!h6JSjJI~}:|I YZ M"c2;J&X?UMKI6OG֝=wD ?ԇ[;;Klwd71&/1N%AiDr=\~2W{:x7}a[Ds/+2)3gӚx'PW3`M*@^s9#4̗7DƆqʳ!0F<F+*:z0k,$:2/m1dBD\!2Qhԡ7Vp^ViůHUhrBW+ԈPĞT+Ć)oДF?05 *5S8vT ǩa Q}xY&,xڊLW|OdL2P݂̥zx$,؋?TBroB&p,COd>j!pkU4-KESR,B3=yz:9.$o.q7=N ??{O|o <:D_-kG;~h+q[+OYr \XNn?w+n6} ̠ ZorߠT4umCC㮇Z,hEŜzJPSOmtqZ9,!V:o?\oǭv[3%PC؀?i R3D,ů `Ŝ9Ȩ\F LA+l UXЧLrN\Q{\#h[\æ#eCi868m~K]Ga9ղM,?#,y%) 1L$e, $8)!bF)0L6_[d-nƶ| Zw.ϓ7zH<\UP94e9f '$Flr9QAg0Qq,, #}F3s00ᝍ xXʯ,YR4KSXXE~LDPҸ0G"(>nJ)c05ߘ:Y ZKUPbZR0xf '{L04mTWmSsJC߶.#R>HJ/g\jP'$Ur=M8C)#div2 RԑCL)/%Vhj*QRH֫ia<\؋<`b?'(`M[Nzлc[-G\$[-!Q(>yKH=À[Nz1 eeK)# "`Vc5`d#RHmNf YP!RH6~ CA=yBn \b!d( -/@qrΏ87iq CuK&w-nFX~8 N˗\w^ͦ8 yyW'Q3y˽߫xʶً/էj=*K.BﶽwVP4-@.!gHmDS0[ht]VpLCj96C(VwPr8;щFrt.٪ 3&$=~m1\;We[ bO*kj.l!WeI{gތSPQ#fgȻt[E[7p82QNf\\j@s6d_o_0XFBfgC Rʇ3iFūL]\Pص @I[ jᘸ1ڲNՌL=w0VbŽe0Vz@V%'>T&١K23DYcguE3s},M_Qioƙ*O/v9~O)eg;Nol,7 zc^-ԪVdU;337bo·mu.\,MUIKTnVzȅwQ>URwu>(+] 8b`u|2 wNK&Cn([=»O*_Ϗ<|1:p݆Emmv˷ݺ!E[) ]j;?U ebjn10ph2j2ie&CFÑ}5gzQ< N[X98Գ;8I_Rj$MliֶIS{R-[ 2]@SD4B#!XNKqT K IX/g2s )y$fARu0U sJQ%OM8|+brcH"`D,L!(+CBS84XSgPVJ*!@)s!-P([,(~AϞl>vU{Rz^*Y9 0(e9n?m#)5~k}/e6 P,w9ysQ-htMɣNG^} Vj68x#gkFT)ZV' &)_"f/O_efw6=!ڿ5jJQvW&\$\ծV"n+p_=j X~sn}=x,T޾|  \`޽~@K.Sl[-aVcλMǙxhej9NI+RR(g3FYCpF~6AK$WxDz;2 7@)T[\Dr,^u T*:\_@`ҟ"$$G9RsT+wKTk>jJCb+IpPHs iB*âvP]zvjCeCɭD=(}'6 w@~WGv#tnpPBOXgͦHV=&T„. h\ 6N5 6Vҭ=~[qޛ_`?yXQCĸB Y#X[r3Bt n +hFJvAhdP\Hs&1$~)E)n|@>gk^x:S0}mtA dpV`Pp%#[<3șːWx4cV>Zv:sJRP"Lg,rH;me\*Xjy$p.i5\lc1ㄨM CrGo<GTtf3>LV6@p9XlKqy08 ZuZ0Uq@ 1d"cz- 9 dNi C<('t0|}Ña헭1~CXWP?.K󓃺]AX)hT}6{ÏzVOOiݿ5$B: 4z!2Nב\畒lHF8E ER}߷݋YGJ&PB`*Rh6/J8 5P: )% tJ~tu8rP}}`WuOZk~|7&DIٱ NI[M='@|Zѐ5xP1)N!*nT$%LpRv#(iG"Ÿ9r@*l _G ss "@V%I0Gh:P@v.}}@BzVc)!4l:^lglq*S"J{Y{|Z~ O7 4~=G\),.3ІJ}=T^zF8'K"n\q+#8!*)ы!<@PBr̒H(O(1K!IJi6/>@W42g7) > =~cB6"]M JFFKI4`<~NJWӠB"8X+B,v' ==}C72{분4_NOXqZ1 *Po9E~V7&I&Pc,",# rS~tG >2@HGVrq HBL.0@#U;; Tu٨~.dOg.6)Ɣ2vwQOI[s\.Ɣ rQpY,FeN Ih.je1ć^ʖ1 G }G=Ltuuq.Jzc/ĿgXR0K9d aUMs{/aIn~8bcMg5'90·B]aTr"̜ x5m*&U )+Uzt'^n*2ٷ2z1KNi=)̊rg>j]=Sk,u?:`7 `ؾ ]dPL"!&"cS D k<ֳ"ՏiZ 샩Xb45}?fkJT4߶?jyJ`%_ȞVH72HTF_zBx:i{{)P _9Rc4=,e{" EDUlg$9Q$Q x8JL_]Sؕ oaKuLK'OcxQ 9JV_(}FT,]Sɓ$ـ'~EV45|cDqJ?/|zw7j0}ib>S'q$ IvXI#2t:CA>X~ށ  5tݼ7l-]x4Tn[1BXЯ+ku$cˮW$v^ ;cg&FABZ:%;I% $( IUqBưnƻl|iY6_kޠT 2#@(b9oo>˻yrQټnmKqǍ`,I$f&(bJ3B@O7 g3PeⲸǛoۋ-m~7}*_nP?UqS)'&OU.rNf2N *8I0!!ghPLoL@Vvg#WO 2[onXKi1cn{f>~V,`B_`~ٻ֧qeuvv=5u؁bnLQ$$'s-;$&by]j ,~-c)9H=)~p`*b4S:!n8&W=PFr8/q:) ~R>%H( eV(m-08㘋kQ(#Dp*=TIԞVaG1(%" Rk9PZ  J@Hѧ!mΝaFQQƉtՂ(!--LbCD:LG~?7rU\eG]q @ጅOnk8 UD*&'׷ Y@4axՇ2_@%t8%% qp8RAzw{jajy^9sWk73J暂1댁5A!,fƌD lL'CRL4VXQM-|@,Tvg [|$O„OPf~I}}Ҥv2i*4`(·Y ӫA^zi3̒87?wq_5p x0l&H_٬3}CG\Xj "YWDfazG it%m,7[Xn 8q@0ef=$Q}aVׂfz,Q HN&8` rgf`}eIk;@IJBzBR`PbsI1Bc+ .%(Sl'iֳZ(XC!L*Øb G7{I׊5mŬaG-.,?VS~,`5PE8xygH>4cuIRD6y[CRyPIrDJ񥴗ΓbSaSIF 8ʬX0kpX)<5{"S# 1I#ZsY̶:j[eV4m{]R-:ʬS04x窮{HְgV }8AR֔uŌ J\Ul/ҥBxY_gӐg"syCfqfR҄8IaWT&?} N^w@3AK0*O:\MLgk~X`aeW**s"bZd(W&C9]'iA^tZ&m)pgM&@B@X1z.!弑p}+J@ͅ\bW~nAx {v+kP9 o]ߙa:FGw?ttԤMhj7}MiIzj+6tlBbD&&tsM͘h Hɯm {+t@(5iWNCcu`%B5{*#q .l"t}#w۷7n7k 9ǪqQeo"T63d6 nThs  y`9;b0XpوE4T(#͹yVQ˺ynk+v'!kj/lk ^9e79DlKX9l;(Ңa̫y9f6hJR?kêzBR#)3X] UpYmn,V}Cւu1Z/zouv S[zVxRcu_\8Z@=}X hEzJ״\;ګAG Ai!YeFasb-< 
F 10319ms (10:02:29.322) Feb 03 10:02:29 crc kubenswrapper[5010]: Trace[686246960]: [10.319145632s] [10.319145632s] END Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.322173 5010 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.323866 5010 trace.go:236] Trace[1504449500]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 10:02:19.185) (total time: 10138ms): Feb 03 10:02:29 crc kubenswrapper[5010]: Trace[1504449500]: ---"Objects listed" error: 10138ms (10:02:29.323) Feb 03 10:02:29 crc kubenswrapper[5010]: Trace[1504449500]: [10.138584187s] [10.138584187s] END Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.324035 5010 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.325101 5010 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.325390 5010 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.325448 5010 trace.go:236] Trace[1215420072]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 10:02:14.586) (total time: 14739ms): Feb 03 10:02:29 crc kubenswrapper[5010]: Trace[1215420072]: ---"Objects listed" error: 14738ms (10:02:29.325) Feb 03 10:02:29 crc kubenswrapper[5010]: Trace[1215420072]: [14.739022883s] [14.739022883s] END Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.325463 5010 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.329021 5010 trace.go:236] Trace[892539971]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 10:02:16.735) (total time: 12593ms): Feb 03 10:02:29 crc kubenswrapper[5010]: Trace[892539971]: ---"Objects listed" error: 12593ms (10:02:29.328) Feb 03 10:02:29 crc kubenswrapper[5010]: Trace[892539971]: [12.593666934s] [12.593666934s] END Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.329054 5010 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.334372 5010 reflector.go:368] 
Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362038 5010 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33026->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362089 5010 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33042->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362106 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33026->192.168.126.11:17697: read: connection reset by peer" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362148 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33042->192.168.126.11:17697: read: connection reset by peer" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362560 5010 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362601 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362811 5010 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362829 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.427109 5010 apiserver.go:52] "Watching apiserver" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.430277 5010 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.430578 5010 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.430929 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.430974 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.431013 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.431157 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.431262 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.431272 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.431309 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.431532 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.431572 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.433625 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.434108 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.434260 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.434268 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.434389 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.434692 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.434937 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.435168 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.435352 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.439488 5010 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.448657 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:01:35.855834024 +0000 UTC Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.458249 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.470812 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.482183 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.494433 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.505534 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.515499 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.525078 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526472 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526508 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526529 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526551 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526573 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526594 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526613 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: 
I0203 10:02:29.526637 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526658 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526675 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526724 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526741 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526749 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526760 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526811 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526832 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526848 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526865 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526882 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526899 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526915 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526944 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526962 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526979 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526974 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.527010 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526972 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526997 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.527532 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.527585 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.527581 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.527750 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.527773 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528272 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528326 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528350 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528411 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528530 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528653 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528901 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528929 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528973 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528985 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.529117 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.529179 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.529173 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530672 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530714 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530739 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530760 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530781 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530803 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530825 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530848 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530872 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530892 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530911 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530933 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530954 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530971 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530992 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531001 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531012 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531093 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531134 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531233 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531593 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531293 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531424 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531651 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531679 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531758 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.532582 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.532610 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.532844 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.532865 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533077 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533189 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533242 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533265 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533290 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533317 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533323 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533386 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533467 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533455 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533564 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533380 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533930 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534167 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534187 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534295 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534384 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534394 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534550 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534619 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534667 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533930 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.535069 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534960 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.535149 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536162 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536193 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536226 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536297 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536316 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536332 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536347 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536362 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536378 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536393 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536410 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536427 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536443 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536460 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536476 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536436 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536494 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536511 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536527 5010 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536566 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536582 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536597 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536614 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536631 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536648 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536664 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536683 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536699 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 
10:02:29.536716 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536732 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536752 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536767 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536782 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536799 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536814 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536830 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.535956 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.537247 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536161 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536404 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536450 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536462 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.537300 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536476 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536649 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536695 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536949 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536999 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.537134 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:02:30.037108389 +0000 UTC m=+20.193084558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.537172 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.537353 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.537664 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.537720 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.537835 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.538071 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.538133 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.538101 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.538303 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.538489 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539018 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539190 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.538607 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539028 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.538966 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539265 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539070 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539091 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539447 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539166 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539272 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539326 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539582 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539613 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539639 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539661 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539684 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539706 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539741 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539764 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539782 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539809 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539830 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539851 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539874 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539894 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539916 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539937 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539978 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540047 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540074 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540096 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540117 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540139 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540162 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540186 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540227 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540251 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540275 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540298 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540323 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540348 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540376 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540394 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540412 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540399 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540455 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540474 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540520 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540540 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540557 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540576 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540594 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540610 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540628 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540628 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540604 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540644 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540707 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540794 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540818 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540851 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540879 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540901 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540898 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540919 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540926 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540971 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541079 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541106 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541108 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541130 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541155 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541179 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541229 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541254 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541276 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541302 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541310 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541162 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541330 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541362 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541380 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541387 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541398 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541415 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541431 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541448 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541467 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541482 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541519 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541538 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541553 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541568 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541584 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541602 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541619 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541635 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541653 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541667 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541684 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541700 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541715 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541732 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541749 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541764 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541782 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541798 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541814 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541830 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541846 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541862 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542077 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542094 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542110 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542127 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542143 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542159 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542176 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542192 5010 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542227 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542245 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542261 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542278 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542327 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542477 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542885 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542858 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542913 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542924 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542950 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542966 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543000 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543030 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543090 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543165 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 
10:02:29.543251 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543284 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543340 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543368 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543420 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543450 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543535 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543552 5010 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543593 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543609 5010 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 
03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543621 5010 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543635 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543692 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543707 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543745 5010 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543761 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543777 5010 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543791 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543831 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543847 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543862 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543875 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543912 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node 
\"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543925 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543939 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543952 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543990 5010 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544038 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544078 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544094 5010 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544108 5010 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544123 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545345 5010 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545390 5010 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545412 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545425 5010 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545436 5010 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545459 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545470 5010 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545487 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545499 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545510 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545522 5010 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545533 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545550 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545564 5010 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545575 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545585 5010 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545596 5010 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545612 5010 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545622 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545635 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545648 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545661 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545674 5010 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545686 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545698 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545711 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545725 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545738 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545750 5010 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545763 5010 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545776 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545788 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545801 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545813 5010 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545825 5010 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545838 5010 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545853 5010 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545868 5010 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545883 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545944 5010 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545957 5010 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545970 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545982 5010 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545997 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546010 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546023 5010 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546035 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546049 5010 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546060 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546073 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546085 5010 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546099 5010 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: 
I0203 10:02:29.546113 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546125 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546138 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546151 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546163 5010 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546176 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546190 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546202 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546230 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546250 5010 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546261 5010 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546273 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546285 5010 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546304 5010 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544498 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546670 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543000 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543011 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543277 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543306 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543728 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543827 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543971 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544094 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544264 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544349 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544466 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.544612 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.547552 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:30.047531393 +0000 UTC m=+20.203507522 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544790 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.544955 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.547808 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.548270 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.547608 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:30.047599645 +0000 UTC m=+20.203575774 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545266 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.548574 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545555 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546503 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546584 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546653 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546922 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.547112 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.549704 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.549778 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.549957 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.549972 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.550032 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.550131 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.550195 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.550299 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.550550 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.550793 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.550950 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.551297 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.551306 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.551494 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.551696 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.551717 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.551797 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.551895 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.552171 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.552256 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.552343 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.552449 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.552658 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.553336 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.559971 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.560025 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.560036 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.560096 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:30.060076182 +0000 UTC m=+20.216052371 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.560628 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.562177 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.562636 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.562707 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.562717 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.563385 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.563602 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.563682 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.563835 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.564140 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.564347 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.564599 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.564762 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.565147 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.565157 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.565411 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.565475 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.565702 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.565765 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.565998 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.566354 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.566124 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.566173 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.566185 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.566873 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.566908 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.567736 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.568040 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.568285 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.568585 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.569737 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.569887 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.569893 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.569922 5010 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.569931 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.569836 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.569978 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:30.069954183 +0000 UTC m=+20.225930312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.569985 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570026 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570242 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570419 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570519 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570631 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570756 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570752 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570917 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.571001 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.571907 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.572577 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.572604 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.572631 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.572678 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.574663 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.576912 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.581310 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.581348 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.581984 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.582113 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.582229 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.583406 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.584382 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.586470 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.592677 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.592732 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.597929 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.604577 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.609897 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.612767 5010 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b" exitCode=255 Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.612809 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b"} Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.618612 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.624067 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.624298 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.625546 5010 scope.go:117] "RemoveContainer" containerID="8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.635648 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.645287 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647555 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647622 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647699 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647717 5010 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647739 5010 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647767 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647766 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647779 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647852 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647865 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647875 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647890 5010 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647931 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647968 5010 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648005 5010 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648015 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648026 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647931 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648042 5010 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648074 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648088 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648099 5010 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648110 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648122 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648132 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648142 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648154 5010 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648164 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648174 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648184 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648196 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648206 5010 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648232 5010 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648250 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648261 5010 
reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648279 5010 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648290 5010 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648301 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648312 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648323 5010 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648334 5010 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648344 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648356 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648367 5010 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648379 5010 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648391 5010 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648425 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648437 5010 
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648448 5010 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648458 5010 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648469 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648479 5010 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648490 5010 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648514 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648524 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648535 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648545 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648554 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648565 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648575 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648590 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648602 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648612 5010 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648622 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648633 5010 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648643 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648654 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648669 5010 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648679 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648690 5010 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648714 5010 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648724 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648735 5010 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648745 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648759 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648769 5010 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648785 5010 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648795 5010 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648807 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648816 5010 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648831 5010 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648842 5010 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648860 5010 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648870 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648879 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648889 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648906 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648916 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648928 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648939 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648949 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648959 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648968 5010 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648977 5010 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648987 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648996 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649007 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649019 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649029 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649040 5010 reconciler_common.go:293] "Volume detached for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649055 5010 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649066 5010 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649076 5010 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649087 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649101 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649112 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649121 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.654979 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.665845 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.679120 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.745426 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.755694 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: W0203 10:02:29.759832 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-b94faaa7be1ba906251a3be62e01618ff7a6ccaa2622df7668ce5bab18f3e530 WatchSource:0}: Error finding container b94faaa7be1ba906251a3be62e01618ff7a6ccaa2622df7668ce5bab18f3e530: Status 404 returned error can't find the container with id b94faaa7be1ba906251a3be62e01618ff7a6ccaa2622df7668ce5bab18f3e530 Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.763197 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: W0203 10:02:29.765863 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-ca37fe8c182aae1ca66969177c79bd19f2838340192c9967d4986dc47bdcb2f3 WatchSource:0}: Error finding container ca37fe8c182aae1ca66969177c79bd19f2838340192c9967d4986dc47bdcb2f3: Status 404 returned error can't find the container with id ca37fe8c182aae1ca66969177c79bd19f2838340192c9967d4986dc47bdcb2f3 Feb 03 10:02:29 crc kubenswrapper[5010]: W0203 10:02:29.782054 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-c81094d4e1af07cffae7a24ee49d2644f5218dd6db650c315f27055b13e9cf41 WatchSource:0}: Error finding container c81094d4e1af07cffae7a24ee49d2644f5218dd6db650c315f27055b13e9cf41: Status 404 returned error can't find the container with id c81094d4e1af07cffae7a24ee49d2644f5218dd6db650c315f27055b13e9cf41 Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.052657 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.052814 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:02:31.052783162 +0000 UTC m=+21.208759291 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.053092 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.053124 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.053203 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.053235 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.053327 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:31.053306926 +0000 UTC m=+21.209283065 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.053357 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:31.053347057 +0000 UTC m=+21.209323206 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.153998 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.154046 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154170 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154186 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154196 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154252 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154300 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154314 5010 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154272 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:31.154255529 +0000 UTC m=+21.310231658 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154442 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:31.154403932 +0000 UTC m=+21.310380061 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.449106 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 01:38:29.527018959 +0000 UTC Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.501839 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.502186 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.505770 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.506271 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.507391 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.508557 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.510661 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.511925 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.513209 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.515168 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.516669 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.518852 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.519458 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.521374 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.521545 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.521856 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.522412 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.523373 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.523925 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 
10:02:30.525062 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.525596 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.526295 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.527450 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.528327 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.528941 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.529455 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.530486 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.530918 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.531994 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.532670 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.533499 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.534069 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.534850 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.535364 5010 kubelet_volumes.go:152] "Cleaned 
up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.535458 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.537947 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.538542 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.539049 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.539997 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.542202 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.543285 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.543819 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.544799 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.545483 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.546355 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.547070 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.549189 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.550134 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.550578 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.551456 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.551993 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.553335 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.553869 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.555053 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.555591 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.556182 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.559651 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.560146 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.562758 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.576889 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.598564 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.613744 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.616920 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569"} Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.616991 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b94faaa7be1ba906251a3be62e01618ff7a6ccaa2622df7668ce5bab18f3e530"} Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.618867 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.620953 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0"} Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.621185 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.622667 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945"} Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.622743 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436"} Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.622763 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c81094d4e1af07cffae7a24ee49d2644f5218dd6db650c315f27055b13e9cf41"} Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.623868 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ca37fe8c182aae1ca66969177c79bd19f2838340192c9967d4986dc47bdcb2f3"} Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.633423 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.653008 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.667122 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.681116 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.694742 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.710702 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.723858 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.735523 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.061823 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.061906 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.061936 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.062025 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.062027 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:02:33.061994647 +0000 UTC m=+23.217970776 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.062089 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:33.062070989 +0000 UTC m=+23.218047198 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.062178 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.062377 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:33.062350686 +0000 UTC m=+23.218326805 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.162928 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.163142 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163109 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163345 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163405 5010 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163495 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:33.163479404 +0000 UTC m=+23.319455533 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163307 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163630 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163680 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163767 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:33.163759341 +0000 UTC m=+23.319735470 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.449700 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:02:23.022517086 +0000 UTC Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.501481 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.501489 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.501613 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.501664 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.450082 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:44:07.818731972 +0000 UTC Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.501131 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:32 crc kubenswrapper[5010]: E0203 10:02:32.501286 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.630709 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601"} Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.646111 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\
\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:32Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.661969 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:32Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.676938 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:32Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.691109 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:32Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.710776 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:32Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.730419 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:32Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.751075 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:32Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.081622 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.081813 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.081839 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:02:37.081800051 +0000 UTC m=+27.237776310 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.081916 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.081985 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.082065 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.082079 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:37.082053898 +0000 UTC m=+27.238030217 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.082234 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:37.082195451 +0000 UTC m=+27.238171610 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.183231 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.183284 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183386 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183403 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183413 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183446 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183484 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183499 5010 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183460 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:37.183447762 +0000 UTC m=+27.339423891 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183579 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:37.183560505 +0000 UTC m=+27.339536644 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.450290 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:35:20.522623962 +0000 UTC Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.501266 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.501314 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.501498 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.501596 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.523541 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.526939 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.533233 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.536187 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.550089 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.562851 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.575800 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.590131 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.603178 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.615114 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.629501 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.641284 5010 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.658836 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.669618 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.684041 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.695083 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.704429 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.715230 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.726936 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:34 crc kubenswrapper[5010]: I0203 10:02:34.451465 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:21:47.538836261 +0000 UTC Feb 03 10:02:34 crc kubenswrapper[5010]: I0203 10:02:34.501726 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:34 crc kubenswrapper[5010]: E0203 10:02:34.501915 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.201489 5010 csr.go:261] certificate signing request csr-l7wqh is approved, waiting to be issued Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.248135 5010 csr.go:257] certificate signing request csr-l7wqh is issued Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.452102 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:01:20.531090237 +0000 UTC Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.501688 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.501738 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.501834 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.501895 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.726139 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.727833 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.727879 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.727891 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.727968 5010 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.733825 5010 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.734093 5010 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.735142 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.735183 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.735194 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.735225 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.735241 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:35Z","lastTransitionTime":"2026-02-03T10:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.757949 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.760887 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.760922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.760935 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.760952 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.760962 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:35Z","lastTransitionTime":"2026-02-03T10:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.771485 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.774424 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.774455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.774466 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.774482 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.774495 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:35Z","lastTransitionTime":"2026-02-03T10:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.794709 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.798469 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.798505 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.798518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.798543 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.798556 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:35Z","lastTransitionTime":"2026-02-03T10:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.811608 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.814693 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.814730 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.814741 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.814754 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.814763 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:35Z","lastTransitionTime":"2026-02-03T10:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.830668 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.830828 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.832534 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.832565 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.832578 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.832594 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.832604 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:35Z","lastTransitionTime":"2026-02-03T10:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.935288 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.935327 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.935335 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.935349 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.935358 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:35Z","lastTransitionTime":"2026-02-03T10:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.037769 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.037818 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.037830 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.037847 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.037859 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.096433 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-89h2z"] Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.096845 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-s4xnz"] Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.097030 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-f5tpq"] Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.097065 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-89h2z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.097297 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.097338 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.099119 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-cvpds"] Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.099648 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.099729 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.100001 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.100069 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.100364 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.100412 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.100502 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.100691 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.100746 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.101002 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.101204 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.101682 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.101945 5010 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.102237 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.102405 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.102889 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.117691 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.135880 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.139953 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.140178 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.140312 5010 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.140402 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.140491 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.149763 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.160788 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.178199 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.196674 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207164 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-cni-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207232 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-etc-kubernetes\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207258 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l8d2\" (UniqueName: \"kubernetes.io/projected/cab56d94-9407-4305-9e87-55e378a0878f-kube-api-access-6l8d2\") pod \"node-resolver-89h2z\" (UID: \"cab56d94-9407-4305-9e87-55e378a0878f\") " pod="openshift-dns/node-resolver-89h2z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207284 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-mcd-auth-proxy-config\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207304 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-cni-bin\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207370 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-kubelet\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207420 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-daemon-config\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207528 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-cni-multus\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207607 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-cni-binary-copy\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207648 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-hostroot\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207674 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mclqv\" (UniqueName: \"kubernetes.io/projected/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-kube-api-access-mclqv\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207693 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmmvm\" (UniqueName: \"kubernetes.io/projected/d5c4274d-0165-4762-850f-b2a2ceb57c0b-kube-api-access-nmmvm\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207711 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-proxy-tls\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207730 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-multus-certs\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207775 5010 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f57xn\" (UniqueName: \"kubernetes.io/projected/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-kube-api-access-f57xn\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207808 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-conf-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207835 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cab56d94-9407-4305-9e87-55e378a0878f-hosts-file\") pod \"node-resolver-89h2z\" (UID: \"cab56d94-9407-4305-9e87-55e378a0878f\") " pod="openshift-dns/node-resolver-89h2z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207886 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cnibin\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207906 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207957 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-system-cni-dir\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207934 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207978 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cni-binary-copy\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208081 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-cnibin\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208107 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-socket-dir-parent\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208130 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-k8s-cni-cncf-io\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208172 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-rootfs\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208197 5010 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-os-release\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208262 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-system-cni-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208281 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208332 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-os-release\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208364 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-netns\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.220302 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.231470 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.242777 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z"
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.243073 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.243101 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.243113 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.243130 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.243142 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.249505 5010 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-03 09:57:35 +0000 UTC, rotation deadline is 2026-12-03 11:45:48.059894672 +0000 UTC
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.249559 5010 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7273h43m11.810338819s for next certificate rotation
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.254084 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.264124 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.274506 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.284034 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.307015 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309302 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-mcd-auth-proxy-config\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309347 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-cni-bin\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 
10:02:36.309375 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-kubelet\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309396 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-daemon-config\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309421 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-cni-multus\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309444 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-hostroot\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309465 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-cni-binary-copy\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309465 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-kubelet\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309489 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmmvm\" (UniqueName: \"kubernetes.io/projected/d5c4274d-0165-4762-850f-b2a2ceb57c0b-kube-api-access-nmmvm\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309492 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-cni-multus\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309460 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-cni-bin\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309546 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mclqv\" (UniqueName: 
\"kubernetes.io/projected/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-kube-api-access-mclqv\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309750 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-hostroot\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309784 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f57xn\" (UniqueName: \"kubernetes.io/projected/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-kube-api-access-f57xn\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309828 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-proxy-tls\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310006 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-multus-certs\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310030 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-mcd-auth-proxy-config\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310042 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-conf-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310070 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-multus-certs\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310069 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cab56d94-9407-4305-9e87-55e378a0878f-hosts-file\") pod \"node-resolver-89h2z\" (UID: \"cab56d94-9407-4305-9e87-55e378a0878f\") " pod="openshift-dns/node-resolver-89h2z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310100 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cnibin\") pod \"multus-additional-cni-plugins-cvpds\" (UID: 
\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310110 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cab56d94-9407-4305-9e87-55e378a0878f-hosts-file\") pod \"node-resolver-89h2z\" (UID: \"cab56d94-9407-4305-9e87-55e378a0878f\") " pod="openshift-dns/node-resolver-89h2z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310115 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310138 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-system-cni-dir\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310142 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-conf-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310155 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cni-binary-copy\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310173 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-socket-dir-parent\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310190 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-k8s-cni-cncf-io\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310259 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-cni-binary-copy\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310275 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-cnibin\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc 
kubenswrapper[5010]: I0203 10:02:36.310295 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-daemon-config\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310315 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cnibin\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310304 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-rootfs\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310343 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-k8s-cni-cncf-io\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310355 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-socket-dir-parent\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310355 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-os-release\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310381 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-cnibin\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310397 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-system-cni-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310325 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-rootfs\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310426 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-system-cni-dir\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310433 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-os-release\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310452 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310463 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-system-cni-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310479 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-netns\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310470 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-os-release\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310402 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-os-release\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310507 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l8d2\" (UniqueName: \"kubernetes.io/projected/cab56d94-9407-4305-9e87-55e378a0878f-kube-api-access-6l8d2\") pod \"node-resolver-89h2z\" (UID: \"cab56d94-9407-4305-9e87-55e378a0878f\") " pod="openshift-dns/node-resolver-89h2z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310539 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-netns\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310567 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-cni-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " 
pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310599 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-etc-kubernetes\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310663 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-cni-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310694 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-etc-kubernetes\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310937 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cni-binary-copy\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.311399 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.311635 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.316945 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-proxy-tls\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.331920 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmmvm\" (UniqueName: \"kubernetes.io/projected/d5c4274d-0165-4762-850f-b2a2ceb57c0b-kube-api-access-nmmvm\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.334533 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.335837 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mclqv\" (UniqueName: \"kubernetes.io/projected/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-kube-api-access-mclqv\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.339873 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f57xn\" (UniqueName: \"kubernetes.io/projected/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-kube-api-access-f57xn\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.345765 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.345793 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.345802 5010 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.345815 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.345825 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.350203 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l8d2\" (UniqueName: \"kubernetes.io/projected/cab56d94-9407-4305-9e87-55e378a0878f-kube-api-access-6l8d2\") pod \"node-resolver-89h2z\" (UID: \"cab56d94-9407-4305-9e87-55e378a0878f\") " pod="openshift-dns/node-resolver-89h2z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.351010 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.363434 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.375643 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.384013 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.395681 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z"
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.410842 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-89h2z"
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.420029 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-f5tpq"
Feb 03 10:02:36 crc kubenswrapper[5010]: W0203 10:02:36.424188 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcab56d94_9407_4305_9e87_55e378a0878f.slice/crio-e19e7361e8845bd89910cf96bc0493054812d1a72d9f87b02465696b42a4be0c WatchSource:0}: Error finding container e19e7361e8845bd89910cf96bc0493054812d1a72d9f87b02465696b42a4be0c: Status 404 returned error can't find the container with id e19e7361e8845bd89910cf96bc0493054812d1a72d9f87b02465696b42a4be0c
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.429462 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz"
Feb 03 10:02:36 crc kubenswrapper[5010]: W0203 10:02:36.433813 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b16bcfb_db8c_4fbe_98f3_2d6c5353cfef.slice/crio-b7de1ec682521ef69328307beddc09d19a5c9f3f8c16189a78b2019cf09f91de WatchSource:0}: Error finding container b7de1ec682521ef69328307beddc09d19a5c9f3f8c16189a78b2019cf09f91de: Status 404 returned error can't find the container with id b7de1ec682521ef69328307beddc09d19a5c9f3f8c16189a78b2019cf09f91de
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.437833 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cvpds"
Feb 03 10:02:36 crc kubenswrapper[5010]: W0203 10:02:36.446181 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode607e2ef_d3d6_4db0_b514_0d5321d9d28d.slice/crio-b66aac4a67055d24ac3f5a7b433b8c06a459a551298364f2b91d5e5e6ab6845a WatchSource:0}: Error finding container b66aac4a67055d24ac3f5a7b433b8c06a459a551298364f2b91d5e5e6ab6845a: Status 404 returned error can't find the container with id b66aac4a67055d24ac3f5a7b433b8c06a459a551298364f2b91d5e5e6ab6845a
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.447288 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.447333 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.447349 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.447366 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.447380 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.452236 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:12:07.360886233 +0000 UTC
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.475420 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-68p7p"]
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.476447 5010 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.478330 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.478349 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.478530 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.479008 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.479086 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.481289 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.481206 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.500635 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.502200 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:36 crc kubenswrapper[5010]: E0203 10:02:36.502340 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.511683 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.512064 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-slash\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.512108 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-node-log\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.512125 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: 
\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.512142 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovn-node-metrics-cert\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.512159 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xwzz\" (UniqueName: \"kubernetes.io/projected/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-kube-api-access-2xwzz\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.512180 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-kubelet\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.513621 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-systemd-units\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515337 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-var-lib-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515425 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-ovn-kubernetes\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515444 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-env-overrides\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515470 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-script-lib\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515490 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-netns\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515510 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-bin\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515525 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-netd\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515540 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-config\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515598 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-log-socket\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515633 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-etc-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515657 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-ovn\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515685 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515727 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-systemd\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.520590 5010 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.533103 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.543656 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.552242 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.552278 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.552292 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.552308 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.552319 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.556316 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.568617 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.579830 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.594193 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.604252 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.616883 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-script-lib\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.616934 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-netns\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.616957 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-bin\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.616978 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-netd\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.616999 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-config\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617021 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-etc-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617044 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-ovn\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617067 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-log-socket\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617088 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-68p7p\" (UID: 
\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617131 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-systemd\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617151 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-slash\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617189 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-node-log\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617204 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617239 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovn-node-metrics-cert\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617256 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xwzz\" (UniqueName: \"kubernetes.io/projected/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-kube-api-access-2xwzz\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617282 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-kubelet\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617311 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-systemd-units\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617326 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-var-lib-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" 
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617342 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-env-overrides\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617362 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-ovn-kubernetes\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.618238 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-ovn-kubernetes\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.618912 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-netns\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.618984 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-bin\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619011 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-netd\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619176 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-kubelet\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619204 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-node-log\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619284 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-etc-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619294 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-log-socket\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619290 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-var-lib-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619240 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619371 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-systemd-units\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619444 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-ovn\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619496 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-slash\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619513 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-systemd\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619803 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.620029 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-config\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.620060 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-env-overrides\") pod \"ovnkube-node-68p7p\" (UID: 
\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.620045 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-script-lib\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.623162 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.625552 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovn-node-metrics-cert\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 
10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.635019 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xwzz\" (UniqueName: \"kubernetes.io/projected/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-kube-api-access-2xwzz\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.637624 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.648311 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-89h2z" event={"ID":"cab56d94-9407-4305-9e87-55e378a0878f","Type":"ContainerStarted","Data":"e19e7361e8845bd89910cf96bc0493054812d1a72d9f87b02465696b42a4be0c"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.650539 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"b66aac4a67055d24ac3f5a7b433b8c06a459a551298364f2b91d5e5e6ab6845a"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.651286 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.653838 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerStarted","Data":"b7de1ec682521ef69328307beddc09d19a5c9f3f8c16189a78b2019cf09f91de"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.653939 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.654008 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.654022 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc 
kubenswrapper[5010]: I0203 10:02:36.654047 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.654065 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.654979 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerStarted","Data":"0e1fef134ebf63c229dae47579579581b3e3f1fa051f07556569567c2de2d944"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.756924 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.756966 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.756975 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.756997 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.757011 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.860193 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.860278 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.860288 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.860301 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.860310 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.901015 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: W0203 10:02:36.914440 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafbb630a_0dee_4c9c_90ff_cb710b9da3f2.slice/crio-397d6ad2bb41a4df9c0dc30fd14d52b9e67cbf17ccd52dacef60dc2182647ba3 WatchSource:0}: Error finding container 397d6ad2bb41a4df9c0dc30fd14d52b9e67cbf17ccd52dacef60dc2182647ba3: Status 404 returned error can't find the container with id 397d6ad2bb41a4df9c0dc30fd14d52b9e67cbf17ccd52dacef60dc2182647ba3 Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.963257 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.963290 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.963301 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.963321 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.963332 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.065918 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.065963 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.065976 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.065993 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.066006 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
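
The message carried by that Ready=False condition — no CNI configuration file in /etc/kubernetes/cni/net.d/ — clears once the ovnkube-node pod being started above writes its config into that directory. A rough stand-alone Go check of the same directory follows; the extension list is an assumption based on the conventional CNI config file types (the runtime's actual scan logic lives in CRI-O, not here):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("read %s: %v", dir, err)
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // conventional CNI config extensions (assumption)
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file found; node stays NotReady")
	}
}
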
Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.127626 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.127721 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.127773 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:02:45.127749562 +0000 UTC m=+35.283725701 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.127837 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.127857 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.127880 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:45.127870355 +0000 UTC m=+35.283846574 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.128008 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.128126 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:45.1280736 +0000 UTC m=+35.284049739 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.168695 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.168728 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.168737 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.168752 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.168762 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.229432 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.229470 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.229605 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.229622 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.229633 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.229672 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:45.229659759 +0000 UTC m=+35.385635888 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.229949 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.229974 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.229986 5010 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.230028 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:45.230016268 +0000 UTC m=+35.385992397 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.270879 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.270949 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.270963 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.270980 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.270993 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.373277 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.373319 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.373329 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.373346 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.373358 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.453253 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:11:51.948948132 +0000 UTC Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.475202 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.475259 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.475269 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.475287 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.475297 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.501739 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.501806 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.501872 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.501991 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.577179 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.577241 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.577257 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.577274 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.577285 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.662092 5010 generic.go:334] "Generic (PLEG): container finished" podID="d5c4274d-0165-4762-850f-b2a2ceb57c0b" containerID="5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6" exitCode=0 Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.662143 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerDied","Data":"5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.664148 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-89h2z" event={"ID":"cab56d94-9407-4305-9e87-55e378a0878f","Type":"ContainerStarted","Data":"a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.665676 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53" exitCode=0 Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.665738 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.665766 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" 
event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"397d6ad2bb41a4df9c0dc30fd14d52b9e67cbf17ccd52dacef60dc2182647ba3"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.668437 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.668474 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.669672 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerStarted","Data":"b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.679848 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.679884 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.679893 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.679907 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.679918 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.681825 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.695126 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.707197 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.720122 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.731337 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.741459 5010 status_manager.go:875] "Failed to update status for pod" 
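Every "Failed to update status for pod" record above and below shares a single root cause: the API server's admission call to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 fails TLS verification because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-03. A minimal Go diagnostic (hypothetical tooling, not part of kubelet or OpenShift; it assumes the webhook is reachable at the address quoted in the log) that handshakes with verification disabled and prints the validity window of whatever certificate the endpoint serves:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	// Skip verification on purpose: the point is to inspect the expired
    	// certificate, which strict verification would reject outright.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		log.Fatalf("handshake failed: %v", err)
    	}
    	defer conn.Close()
    	for _, cert := range conn.ConnectionState().PeerCertificates {
    		fmt.Printf("subject=%q notBefore=%s notAfter=%s expiredNow=%t\n",
    			cert.Subject.CommonName,
    			cert.NotBefore.Format(time.RFC3339),
    			cert.NotAfter.Format(time.RFC3339),
    			time.Now().After(cert.NotAfter))
    	}
    }

Until that certificate is rotated (or the node clock corrected), every status patch that passes through this webhook keeps failing with the same x509 error, as the rest of this section shows.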
pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.755149 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.768355 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.780384 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.782244 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.782278 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.782290 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.782306 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.782319 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.792365 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.805235 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.816340 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
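The interleaved "Node became not ready" records are a separate but related symptom: the kubelet's container-runtime network check finds no CNI configuration in /etc/kubernetes/cni/net.d/ because ovnkube-node and multus are themselves still in PodInitializing. A small sketch of the same directory check; the path comes straight from the log message, while the accepted extensions are an assumption based on what libcni scans for:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Directory the kubelet names in the NetworkPluginNotReady message.
    	dir := "/etc/kubernetes/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		log.Fatal(err)
    	}
    	found := 0
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json": // extensions libcni loads
    			fmt.Println("CNI config present:", filepath.Join(dir, e.Name()))
    			found++
    		}
    	}
    	if found == 0 {
    		fmt.Println("no CNI configuration; node stays NotReady until the network plugin writes one")
    	}
    }

The node should flip back to Ready on its own once ovnkube-controller starts and drops its config file into that directory.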
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.835295 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.851313 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc 
kubenswrapper[5010]: I0203 10:02:37.863818 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.875530 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
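The patch bodies in these records are JSON that has been quoted twice by the logging pipeline, which makes them hard to read in place. A throwaway filter for pulling them out of a saved copy of this log; the marker strings and the single unescaping pass (replacing \\\" with ") are assumptions tied to the exact quoting seen here, not a general journal parser:

    package main

    import (
    	"bufio"
    	"bytes"
    	"encoding/json"
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const head = `failed to patch status \"`
    	const tail = `\" for pod`
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // kubelet lines run long
    	for sc.Scan() {
    		line := sc.Text()
    		i := strings.Index(line, head)
    		if i < 0 {
    			continue
    		}
    		rest := line[i+len(head):]
    		j := strings.Index(rest, tail)
    		if j < 0 {
    			continue
    		}
    		// One unescaping level: the patch's quotes appear as \\\" in the log.
    		patch := strings.ReplaceAll(rest[:j], `\\\"`, `"`)
    		var out bytes.Buffer
    		if err := json.Indent(&out, []byte(patch), "", "  "); err != nil {
    			log.Printf("unescaping assumption failed: %v", err)
    			continue
    		}
    		fmt.Println(out.String())
    	}
    }

Feeding one of these records through the filter yields the pod's conditions, containerStatuses, and $setElementOrder directives as ordinary indented JSON.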
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.885159 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.885207 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.885235 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.885254 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.885266 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.888956 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.909148 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.921881 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.935043 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.951085 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z 
is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.962754 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.974527 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.987921 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.988606 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.988635 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.988645 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.988660 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.988670 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.004730 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.019740 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.091566 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.091632 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.091647 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.091663 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.091674 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.193569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.193614 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.193625 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.193640 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.193656 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.295825 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.296144 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.296153 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.296168 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.296178 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.397929 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.397970 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.397982 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.397999 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.398011 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.453928 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:59:47.497532664 +0000 UTC Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.500468 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.500518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.500531 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.500549 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.500562 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.501135 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:38 crc kubenswrapper[5010]: E0203 10:02:38.501247 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.603138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.603185 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.603198 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.603242 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.603259 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.676299 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.676345 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.676356 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.676368 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.676376 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.678951 5010 generic.go:334] "Generic (PLEG): container finished" podID="d5c4274d-0165-4762-850f-b2a2ceb57c0b" containerID="2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb" exitCode=0 Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.679105 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerDied","Data":"2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.696826 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z"
Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.706389 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.706454 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.706467 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.706488 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.706503 5010 setters.go:603] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.712112 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.726250 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.740158 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.752375 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.764480 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.778967 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.798120 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z 
is after 2025-08-24T17:21:41Z"
Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.812374 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z"
Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.812551 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.812587 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.812617 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.812632 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.812642 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.825643 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z"
Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.844272 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.859126 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.871617 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.914743 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.914780 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.914790 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.914806 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.914817 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.016800 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.016846 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.016857 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.016876 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.016897 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.021030 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.035201 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7lfkq"] Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.035363 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.035736 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.037064 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.037148 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.037681 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.039984 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.050789 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b1
54edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.059620 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.069607 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.079230 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.089378 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.100128 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.119778 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.119821 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.119833 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.119849 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.119861 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.121392 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z 
is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.132669 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.145286 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.149459 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a594fab0-c299-4489-be04-95a81c6dd272-serviceca\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.149505 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a594fab0-c299-4489-be04-95a81c6dd272-host\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.149531 5010 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llslg\" (UniqueName: \"kubernetes.io/projected/a594fab0-c299-4489-be04-95a81c6dd272-kube-api-access-llslg\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.156355 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.168790 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.179759 5010 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd
5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.192748 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.205610 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.217202 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.221844 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.221878 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.221901 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.221916 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.221926 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.231402 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.244573 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.250911 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a594fab0-c299-4489-be04-95a81c6dd272-host\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.250957 5010 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-llslg\" (UniqueName: \"kubernetes.io/projected/a594fab0-c299-4489-be04-95a81c6dd272-kube-api-access-llslg\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.251035 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a594fab0-c299-4489-be04-95a81c6dd272-serviceca\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.251424 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a594fab0-c299-4489-be04-95a81c6dd272-host\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.251889 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a594fab0-c299-4489-be04-95a81c6dd272-serviceca\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.258444 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.269791 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.269865 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llslg\" (UniqueName: \"kubernetes.io/projected/a594fab0-c299-4489-be04-95a81c6dd272-kube-api-access-llslg\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.281036 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.291253 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\
\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.301547 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.318830 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z 
is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.324451 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.324489 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.324500 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.324515 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.324525 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.331158 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.343765 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.351363 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.356551 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.427451 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.427723 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.427734 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.427748 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.427759 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.455013 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:50:26.429756338 +0000 UTC Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.501242 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.501351 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:39 crc kubenswrapper[5010]: E0203 10:02:39.501426 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:39 crc kubenswrapper[5010]: E0203 10:02:39.502249 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.533245 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.533297 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.533311 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.533331 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.533349 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.635991 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.636033 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.636043 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.636059 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.636070 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.685327 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.686492 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7lfkq" event={"ID":"a594fab0-c299-4489-be04-95a81c6dd272","Type":"ContainerStarted","Data":"5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.686538 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7lfkq" event={"ID":"a594fab0-c299-4489-be04-95a81c6dd272","Type":"ContainerStarted","Data":"4209a2a84405f3e5ebd4b7fefddd1dd9531d4d650846b426212c9042285e2146"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.688887 5010 generic.go:334] "Generic (PLEG): container finished" podID="d5c4274d-0165-4762-850f-b2a2ceb57c0b" containerID="e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa" exitCode=0 Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.688916 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerDied","Data":"e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.699230 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.709374 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.723463 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.739465 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.739520 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.739538 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.739560 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.739577 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.739829 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.752872 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.763403 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.784733 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z 
is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.793495 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.807942 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.820937 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.832913 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.842240 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.842272 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.842281 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.842295 5010 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.842305 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.844710 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.858759 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.874171 5010 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd
5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.890306 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.902854 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.914159 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.924200 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.940114 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.944716 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.944747 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.944756 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.944770 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.944780 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.976932 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.000327 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.015454 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.026720 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.040896 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.046397 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.046429 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.046443 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.046457 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.046467 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.052133 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.069631 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z 
is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.082506 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.095140 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.148821 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.148868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.148880 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.148898 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.148909 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.251826 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.251879 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.251893 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.251922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.251937 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.291587 5010 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.353937 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.354151 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.354265 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.354350 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.354436 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.455453 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:53:35.926305757 +0000 UTC Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.457488 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.457542 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.457553 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.457571 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.457587 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.501376 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:40 crc kubenswrapper[5010]: E0203 10:02:40.501558 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.514946 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.526916 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.535922 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.549122 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.559353 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.559383 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.559392 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.559404 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.559413 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.561944 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.576032 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.592406 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.605355 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.621794 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.631957 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.648499 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z 
is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.657716 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.661242 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.661293 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.661307 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.661324 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.661336 5010 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.668079 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.679694 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.694242 5010 generic.go:334] "Generic (PLEG): container finished" podID="d5c4274d-0165-4762-850f-b2a2ceb57c0b" containerID="443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e" exitCode=0 Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.694311 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" 
event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerDied","Data":"443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.710062 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.722162 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.733698 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.742592 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.758458 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.764969 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.765030 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.765042 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.765084 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.765096 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.770739 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.787764 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.798364 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.810896 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.825946 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.866468 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.866996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.867013 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.867022 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.867056 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.867067 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.907938 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.949097 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.969767 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.969809 5010 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.969819 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.969835 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.969845 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.989577 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.071835 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.071873 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.071884 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.071901 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.071912 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.174815 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.174901 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.174921 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.174945 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.174962 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.277327 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.277370 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.277385 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.277403 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.277416 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.379646 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.379693 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.379710 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.379731 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.379746 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.456238 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:55:02.155463336 +0000 UTC Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.482424 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.482461 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.482470 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.482485 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.482494 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.502059 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:41 crc kubenswrapper[5010]: E0203 10:02:41.502241 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.502067 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:41 crc kubenswrapper[5010]: E0203 10:02:41.502438 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.585342 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.585383 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.585391 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.585433 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.585444 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.688684 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.688783 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.688803 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.688828 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.688845 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.703401 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.706862 5010 generic.go:334] "Generic (PLEG): container finished" podID="d5c4274d-0165-4762-850f-b2a2ceb57c0b" containerID="32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9" exitCode=0 Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.706904 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerDied","Data":"32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.721411 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.739047 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.755981 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.771350 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.790247 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z 
is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.790972 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.791001 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.791010 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.791023 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.791034 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.800709 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.812800 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.824565 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.836898 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.849391 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.862478 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.877586 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.887710 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.893115 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.893182 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.893192 5010 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.893205 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.893251 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.896410 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.995295 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 
crc kubenswrapper[5010]: I0203 10:02:41.995323 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.995331 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.995345 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.995356 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.098303 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.098356 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.098375 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.098402 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.098421 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.200952 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.200992 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.201005 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.201020 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.201031 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.304073 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.304127 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.304142 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.304163 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.304178 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.406649 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.406694 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.406709 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.406733 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.406753 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.456440 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:36:23.535613324 +0000 UTC Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.501975 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:42 crc kubenswrapper[5010]: E0203 10:02:42.502099 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.508268 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.508303 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.508311 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.508325 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.508336 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.611061 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.611101 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.611110 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.611123 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.611131 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.712901 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.712952 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.712970 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.712990 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.713005 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.715671 5010 generic.go:334] "Generic (PLEG): container finished" podID="d5c4274d-0165-4762-850f-b2a2ceb57c0b" containerID="da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d" exitCode=0 Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.715708 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerDied","Data":"da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.733855 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.745455 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.762478 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e
fe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.784539 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.795990 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.809671 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.814685 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.814720 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.814729 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.814744 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.814754 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.828727 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.839588 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.850880 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.859726 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.870848 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.880375 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.889986 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.901005 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z"
Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.917003 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.917041 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.917051 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.917067 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.917079 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.018656 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.018901 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.018908 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.018921 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.018930 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
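The webhook failures repeating through this window are a pure validity-window problem: the serving certificate for pod.network-node-identity.openshift.io expired at 2025-08-24T17:21:41Z, while the node clock reads 2026-02-03, so every status patch is rejected before it ever reaches the API object. A minimal Go sketch of the same check Go's TLS verifier performs (the certificate path below is a hypothetical placeholder, not taken from this log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	// Hypothetical path to the webhook's serving certificate.
    	data, err := os.ReadFile("webhook-serving.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	now := time.Now()
    	switch {
    	case now.Before(cert.NotBefore):
    		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
    			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
    	case now.After(cert.NotAfter):
    		// This is the condition every patch attempt in this log keeps hitting.
    		fmt.Printf("certificate has expired: current time %s is after %s\n",
    			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
    	default:
    		fmt.Println("certificate is within its validity window")
    	}
    }

Until that certificate is regenerated, each "Failed to update status for pod" record below fails the same way regardless of the pod involved.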
Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.121101 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.121133 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.121141 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.121155 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.121164 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.223372 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.223432 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.223447 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.223468 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.223481 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.325698 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.325730 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.325742 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.325757 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.325768 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.427463 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.427496 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.427504 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.427518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.427527 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.457048 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:04:16.66007969 +0000 UTC Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.501671 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.501671 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:43 crc kubenswrapper[5010]: E0203 10:02:43.501839 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:43 crc kubenswrapper[5010]: E0203 10:02:43.501949 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.529387 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.529425 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.529436 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.529453 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.529466 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.635108 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.635148 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.635156 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.635169 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.635177 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.723001 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.723614 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.727589 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerStarted","Data":"1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.738232 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.738492 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.738488 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.738597 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.738751 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.738765 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.749418 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.751270 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.762505 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.774928 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.786000 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.803054 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\
\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc 
kubenswrapper[5010]: I0203 10:02:43.814957 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.827182 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.839427 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.841131 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.841175 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.841185 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.841201 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.841476 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.851755 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.865923 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.877579 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.885690 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.898173 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.911849 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.922399 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.933289 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z"
Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.943990 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.944029 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.944040 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.944055 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.944067 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.944949 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.956275 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.965927 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.980398 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.989551 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.001887 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.011176 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.020806 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.032196 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.042400 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.046205 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.046274 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.046289 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.046306 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.046318 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.059116 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2
cf775118fb6e82132a3e8658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.149063 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.149120 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.149142 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.149166 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.149181 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.253729 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.253814 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.253859 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.253890 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.253913 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.356036 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.356083 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.356094 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.356109 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.356119 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.457397 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:55:06.361621089 +0000 UTC Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.459544 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.459596 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.459613 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.459637 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.459655 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.502200 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:44 crc kubenswrapper[5010]: E0203 10:02:44.502446 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.562881 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.562970 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.563035 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.563065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.563085 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.665962 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.666006 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.666017 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.666034 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.666044 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.731961 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.732742 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.761197 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.768193 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.768304 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.768330 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.768364 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.768389 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.772193 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.785250 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.796485 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.810672 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.823802 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.835310 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.855627 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\
\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 
10:02:44.871132 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.871178 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.871189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.871209 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.871238 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.875748 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.886975 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.902600 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.915742 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.932436 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.944853 5010 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.961019 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.973955 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.973985 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc 
kubenswrapper[5010]: I0203 10:02:44.973995 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.974011 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.974020 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.076888 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.076923 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.076935 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.076951 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.076964 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.183816 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.183859 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.183870 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.183886 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.183896 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.210179 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.210355 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:03:01.210320596 +0000 UTC m=+51.366296725 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.210423 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.210448 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.210541 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.210599 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:01.210583612 +0000 UTC m=+51.366559801 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.210657 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.210756 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:01.210735966 +0000 UTC m=+51.366712095 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.286677 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.286727 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.286737 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.286760 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.286772 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.311717 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.311779 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.311947 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.311986 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.311998 5010 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.311994 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.312024 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.312039 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.312064 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:01.312045429 +0000 UTC m=+51.468021558 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.312348 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:01.312320685 +0000 UTC m=+51.468296884 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.389288 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.389327 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.389336 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.389349 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.389358 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.458165 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:52:28.738917298 +0000 UTC Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.491117 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.491151 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.491159 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.491204 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.491230 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.501550 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.501647 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.501732 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.501913 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.595961 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.596037 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.596048 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.596062 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.596075 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.698391 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.698688 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.698697 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.698712 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.698722 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.735287 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.801874 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.801905 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.801917 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.801932 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.801941 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.904431 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.904468 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.904476 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.904490 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.904503 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.905388 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.905419 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.905430 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.905442 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.905451 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.916820 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.919491 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.919514 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.919522 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.919534 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.919543 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.935721 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.942275 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.942315 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.942326 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.942365 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.942377 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.956878 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.961786 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.961818 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.961830 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.961845 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.961856 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.981743 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.985198 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.985246 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.985258 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.985277 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.985289 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.997887 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.998059 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.006018 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.006054 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.006066 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.006080 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.006090 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.108287 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.108337 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.108349 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.108363 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.108373 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.211209 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.211360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.211379 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.211402 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.211453 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.314424 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.314498 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.314512 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.314537 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.314548 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.417466 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.417545 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.417568 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.417589 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.417642 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.493253 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 18:15:08.316774449 +0000 UTC Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.501842 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:46 crc kubenswrapper[5010]: E0203 10:02:46.502061 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.519800 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.519859 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.519884 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.519905 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.519921 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.622076 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.622160 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.622179 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.622199 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.622233 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.724929 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.724968 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.724979 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.724993 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.725005 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.745329 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/0.log" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.749888 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658" exitCode=1 Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.749945 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.751020 5010 scope.go:117] "RemoveContainer" containerID="6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.765357 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.782898 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.800823 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.816911 5010 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.827022 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.827075 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.827092 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.827116 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.827132 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.832248 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02
-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 
10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.844893 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.859323 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"R
unning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.876795 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2
cf775118fb6e82132a3e8658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:46Z\\\",\\\"message\\\":\\\"534 6343 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 10:02:45.979597 6343 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 10:02:45.979604 6343 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 10:02:45.979608 6343 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979632 6343 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 10:02:45.979634 6343 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 10:02:45.979638 6343 factory.go:656] Stopping watch factory\\\\nI0203 10:02:45.979653 6343 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 10:02:45.979655 6343 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 10:02:45.979673 6343 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 10:02:45.979691 6343 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979842 6343 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c
7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.895021 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.909878 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.923827 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.929389 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.929424 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.929432 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.929447 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.929459 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.937016 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.951840 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.962571 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.031621 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.031705 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.031723 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.031810 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.031845 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.133970 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.134025 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.134043 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.134065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.134081 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.236160 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.236189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.236197 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.236236 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.236245 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.339096 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.339138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.339149 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.339168 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.339179 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.442227 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.442284 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.442299 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.442319 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.442336 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.493611 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 06:26:15.850823457 +0000 UTC Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.501854 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.501967 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:47 crc kubenswrapper[5010]: E0203 10:02:47.502019 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:47 crc kubenswrapper[5010]: E0203 10:02:47.502154 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.546645 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.546700 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.546715 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.546739 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.546757 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.649994 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.650059 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.650085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.650109 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.650122 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.752348 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.752426 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.752443 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.752468 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.752486 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.756046 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/0.log" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.760199 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.760389 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.777693 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.791972 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.808434 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.818632 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.848862 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:46Z\\\",\\\"message\\\":\\\"534 6343 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 10:02:45.979597 6343 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 10:02:45.979604 6343 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 10:02:45.979608 6343 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979632 6343 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 10:02:45.979634 6343 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 10:02:45.979638 6343 factory.go:656] Stopping watch factory\\\\nI0203 10:02:45.979653 6343 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 10:02:45.979655 6343 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 10:02:45.979673 6343 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 10:02:45.979691 6343 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979842 6343 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\
\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.854673 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.854717 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.854728 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.854746 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.854758 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.866680 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.882346 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.897088 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.910101 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.922429 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.935657 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.948563 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be
80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.957545 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.957576 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.957586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.957601 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.957611 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.960116 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.969503 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.060345 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.060407 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.060422 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.060446 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.060462 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.163682 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.163740 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.163761 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.163785 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.163806 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.267812 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.267888 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.267916 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.267947 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.267971 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.370942 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.371271 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.371384 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.371423 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.371436 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.474362 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.474421 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.474433 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.474453 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.474466 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.493908 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:03:11.616968758 +0000 UTC Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.501377 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:48 crc kubenswrapper[5010]: E0203 10:02:48.503092 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.577048 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.577289 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.577354 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.577422 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.577480 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.667677 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl"] Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.668619 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.670814 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.670852 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.680455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.680651 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.680734 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.680814 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.680900 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.689712 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:46Z\\\",\\\"message\\\":\\\"534 6343 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 10:02:45.979597 6343 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 10:02:45.979604 6343 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 10:02:45.979608 6343 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979632 6343 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 10:02:45.979634 6343 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 10:02:45.979638 6343 factory.go:656] Stopping watch factory\\\\nI0203 10:02:45.979653 6343 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 10:02:45.979655 6343 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 10:02:45.979673 6343 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 10:02:45.979691 6343 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979842 6343 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\
\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.702451 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.713028 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.714337 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bde7a589-c2e8-48b2-aa06-2fb99731df31-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.714376 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bde7a589-c2e8-48b2-aa06-2fb99731df31-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.714451 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bde7a589-c2e8-48b2-aa06-2fb99731df31-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.714492 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fhp4\" (UniqueName: \"kubernetes.io/projected/bde7a589-c2e8-48b2-aa06-2fb99731df31-kube-api-access-8fhp4\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.723267 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.733799 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.748015 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.760123 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.764576 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/1.log" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.765235 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/0.log" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.767419 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c" exitCode=1 Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.767464 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.767507 5010 scope.go:117] "RemoveContainer" containerID="6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.768382 5010 scope.go:117] "RemoveContainer" containerID="795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c" Feb 03 10:02:48 crc kubenswrapper[5010]: E0203 10:02:48.769175 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.773589 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.783238 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.783282 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.783305 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.783325 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.783336 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.787229 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.798560 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.815132 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.815301 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bde7a589-c2e8-48b2-aa06-2fb99731df31-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.815361 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bde7a589-c2e8-48b2-aa06-2fb99731df31-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.815416 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bde7a589-c2e8-48b2-aa06-2fb99731df31-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.815452 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8fhp4\" (UniqueName: \"kubernetes.io/projected/bde7a589-c2e8-48b2-aa06-2fb99731df31-kube-api-access-8fhp4\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.817808 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bde7a589-c2e8-48b2-aa06-2fb99731df31-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.818601 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bde7a589-c2e8-48b2-aa06-2fb99731df31-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.824954 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bde7a589-c2e8-48b2-aa06-2fb99731df31-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.829699 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.838764 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fhp4\" (UniqueName: \"kubernetes.io/projected/bde7a589-c2e8-48b2-aa06-2fb99731df31-kube-api-access-8fhp4\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.843481 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.853352 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.867160 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.883932 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.885837 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.885873 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.885930 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.885953 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.885964 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.897862 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.912772 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.927166 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.939171 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.953986 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.966316 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.981967 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.985472 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.987911 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.987969 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.987978 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.987994 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.988020 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.997543 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: W0203 10:02:49.001602 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbde7a589_c2e8_48b2_aa06_2fb99731df31.slice/crio-ccf5d4d7077896db33e5d4cd50a872d9d21364abf54be63cf0c164bb1dc909ac WatchSource:0}: Error finding container ccf5d4d7077896db33e5d4cd50a872d9d21364abf54be63cf0c164bb1dc909ac: Status 404 returned error can't find the container with id ccf5d4d7077896db33e5d4cd50a872d9d21364abf54be63cf0c164bb1dc909ac Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.016497 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531
651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:46Z\\\",\\\"message\\\":\\\"534 6343 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 10:02:45.979597 6343 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 10:02:45.979604 6343 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 10:02:45.979608 6343 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979632 6343 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 10:02:45.979634 6343 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 10:02:45.979638 6343 factory.go:656] Stopping watch factory\\\\nI0203 10:02:45.979653 6343 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 10:02:45.979655 6343 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 10:02:45.979673 6343 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 10:02:45.979691 6343 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979842 6343 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 obj_retry.go:365] Adding new object: *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.031422 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.044629 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.059519 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.078125 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.090265 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.090490 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.090506 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.090525 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.090539 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.092483 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.192734 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.192772 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.192787 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.192806 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.192818 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.295357 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.295397 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.295407 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.295423 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.295434 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.397555 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.397584 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.397593 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.397606 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.397615 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.494075 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:04:47.817406044 +0000 UTC Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.499664 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.499700 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.499712 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.499728 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.499739 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.501041 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:49 crc kubenswrapper[5010]: E0203 10:02:49.501144 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.501231 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:49 crc kubenswrapper[5010]: E0203 10:02:49.501390 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.602618 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.602651 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.602660 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.602673 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.602683 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.705428 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.705481 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.705493 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.705510 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.705522 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.764632 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-clvdz"] Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.765445 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:49 crc kubenswrapper[5010]: E0203 10:02:49.765747 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.773156 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" event={"ID":"bde7a589-c2e8-48b2-aa06-2fb99731df31","Type":"ContainerStarted","Data":"4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.773252 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" event={"ID":"bde7a589-c2e8-48b2-aa06-2fb99731df31","Type":"ContainerStarted","Data":"dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.773275 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" event={"ID":"bde7a589-c2e8-48b2-aa06-2fb99731df31","Type":"ContainerStarted","Data":"ccf5d4d7077896db33e5d4cd50a872d9d21364abf54be63cf0c164bb1dc909ac"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.775460 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/1.log" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.778994 5010 scope.go:117] "RemoveContainer" containerID="795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c" Feb 03 10:02:49 crc kubenswrapper[5010]: E0203 10:02:49.779166 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.789306 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.802649 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.807921 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.807963 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.807979 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.808002 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.808022 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.819539 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.827633 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rrj5\" (UniqueName: \"kubernetes.io/projected/081d0234-b506-49ff-81c9-c535f6e1c588-kube-api-access-6rrj5\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.827802 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.832111 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.844471 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.861864 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.872945 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.888361 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.903055 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.910915 5010 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.910965 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.910977 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.910996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.911008 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.922996 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531
651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:46Z\\\",\\\"message\\\":\\\"534 6343 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 10:02:45.979597 6343 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 10:02:45.979604 6343 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 10:02:45.979608 6343 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979632 6343 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 10:02:45.979634 6343 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 10:02:45.979638 6343 factory.go:656] Stopping watch factory\\\\nI0203 10:02:45.979653 6343 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 10:02:45.979655 6343 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 10:02:45.979673 6343 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 10:02:45.979691 6343 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979842 6343 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 obj_retry.go:365] Adding new object: *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.929909 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.930017 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rrj5\" (UniqueName: \"kubernetes.io/projected/081d0234-b506-49ff-81c9-c535f6e1c588-kube-api-access-6rrj5\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:49 crc kubenswrapper[5010]: E0203 10:02:49.930521 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:49 crc kubenswrapper[5010]: E0203 10:02:49.930630 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:50.430607047 +0000 UTC m=+40.586583176 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs") pod "network-metrics-daemon-clvdz" (UID: "081d0234-b506-49ff-81c9-c535f6e1c588") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.936064 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.947906 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.954307 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rrj5\" (UniqueName: \"kubernetes.io/projected/081d0234-b506-49ff-81c9-c535f6e1c588-kube-api-access-6rrj5\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.963034 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.975328 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.987260 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.996285 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.007187 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.013667 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.013706 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.013715 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.013730 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.013740 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.017068 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.029143 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.042046 5010 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.056125 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.073010 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.084541 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.094126 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.104716 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.115972 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.116009 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.116018 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.116034 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.116052 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.118314 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.131273 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.142267 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.159304 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 
obj_retry.go:365] Adding new object: *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.169940 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.182208 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.193608 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.218582 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.218627 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.218638 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.218653 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.218661 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.321280 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.321392 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.321407 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.321423 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.321437 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.423612 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.423654 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.423666 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.423686 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.423698 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.434126 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:50 crc kubenswrapper[5010]: E0203 10:02:50.434299 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:50 crc kubenswrapper[5010]: E0203 10:02:50.434373 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:51.434352458 +0000 UTC m=+41.590328597 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs") pod "network-metrics-daemon-clvdz" (UID: "081d0234-b506-49ff-81c9-c535f6e1c588") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.494779 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:39:18.786681866 +0000 UTC Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.501309 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:50 crc kubenswrapper[5010]: E0203 10:02:50.501483 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.520525 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"star
tTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.526093 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.526143 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.526160 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.526184 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.526201 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.539553 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.554528 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.563916 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.575736 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.587850 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.605760 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 
obj_retry.go:365] Adding new object: *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.616387 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.628098 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.628127 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.628138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.628154 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.628165 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.628431 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.638439 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.648120 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.658544 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.667527 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.678496 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.687641 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.700653 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.730461 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.730496 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc 
kubenswrapper[5010]: I0203 10:02:50.730508 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.730531 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.730556 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.832749 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.832838 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.832857 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.832877 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.832892 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.935358 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.935402 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.935414 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.935430 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.935443 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.037587 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.037620 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.037631 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.037643 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.037655 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.140082 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.140107 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.140115 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.140129 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.140140 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.242407 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.242439 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.242448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.242463 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.242472 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.344338 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.344376 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.344385 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.344399 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.344409 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.444281 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:51 crc kubenswrapper[5010]: E0203 10:02:51.444507 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:51 crc kubenswrapper[5010]: E0203 10:02:51.444628 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:53.444599159 +0000 UTC m=+43.600575328 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs") pod "network-metrics-daemon-clvdz" (UID: "081d0234-b506-49ff-81c9-c535f6e1c588") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.446745 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.446788 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.446807 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.446823 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.446834 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.495744 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 22:53:50.514232136 +0000 UTC Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.502138 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.502265 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.502280 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:51 crc kubenswrapper[5010]: E0203 10:02:51.502370 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:51 crc kubenswrapper[5010]: E0203 10:02:51.502479 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:02:51 crc kubenswrapper[5010]: E0203 10:02:51.502567 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.550114 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.550349 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.550451 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.550520 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.550582 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.652970 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.653054 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.653083 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.653110 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.653133 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.756669 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.756801 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.756831 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.756862 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.756883 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.859627 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.859677 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.859688 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.859705 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.859720 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:51.962852 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:51.962887 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:51.962899 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:51.962917 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:51.962930 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.066189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.066238 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.066271 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.066286 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.066299 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.168929 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.169006 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.169021 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.169045 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.169059 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.271449 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.271499 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.271511 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.271529 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.271540 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.373778 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.373831 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.373848 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.373865 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.373895 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.476254 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.476294 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.476304 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.476319 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.476329 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.496377 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:34:57.913122011 +0000 UTC Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.501673 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:52 crc kubenswrapper[5010]: E0203 10:02:52.501831 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.579387 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.579466 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.579488 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.579517 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.579541 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.682325 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.682371 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.682388 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.682407 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.682421 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.784854 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.784927 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.784949 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.784977 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.785000 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.887450 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.887803 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.887943 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.888074 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.888240 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.990892 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.990927 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.990935 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.990948 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.990957 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.093847 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.093879 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.093887 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.093900 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.093908 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.196885 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.197343 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.197575 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.197729 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.197855 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.300885 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.301176 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.301307 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.301455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.301548 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.403915 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.403969 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.403982 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.403998 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.404011 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.470549 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:53 crc kubenswrapper[5010]: E0203 10:02:53.470681 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:53 crc kubenswrapper[5010]: E0203 10:02:53.470751 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:57.470730514 +0000 UTC m=+47.626706643 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs") pod "network-metrics-daemon-clvdz" (UID: "081d0234-b506-49ff-81c9-c535f6e1c588") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.496506 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:17:10.872075212 +0000 UTC Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.501439 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:53 crc kubenswrapper[5010]: E0203 10:02:53.501589 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.501465 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:53 crc kubenswrapper[5010]: E0203 10:02:53.501684 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.501446 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:53 crc kubenswrapper[5010]: E0203 10:02:53.501777 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.506623 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.506666 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.506678 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.506694 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.506706 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.609553 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.609602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.609613 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.609631 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.609643 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.712339 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.712377 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.712386 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.712399 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.712409 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.814946 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.814996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.815009 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.815027 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.815038 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.918464 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.918683 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.918729 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.918835 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.918869 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.022481 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.022553 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.022578 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.022608 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.022634 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.125564 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.125633 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.125652 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.125677 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.125695 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.228398 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.228472 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.228493 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.228521 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.228538 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.331997 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.332048 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.332060 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.332077 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.332089 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.434496 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.434573 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.434592 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.434618 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.434635 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.497035 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:14:58.570133307 +0000 UTC Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.501585 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:54 crc kubenswrapper[5010]: E0203 10:02:54.501763 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.536828 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.536868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.536876 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.536890 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.536899 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.639604 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.639708 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.639725 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.639755 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.639772 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.741931 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.741976 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.741993 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.742008 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.742020 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.844391 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.844422 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.844432 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.844444 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.844453 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.947463 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.947621 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.947647 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.947676 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.947697 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.042146 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.043653 5010 scope.go:117] "RemoveContainer" containerID="795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c" Feb 03 10:02:55 crc kubenswrapper[5010]: E0203 10:02:55.044016 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.051045 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.051102 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.051119 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.051142 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.051160 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.154173 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.154277 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.154304 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.154333 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.154355 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.257125 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.257184 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.257199 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.257237 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.257254 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.359748 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.360004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.360016 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.360032 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.360043 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.462919 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.462954 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.462966 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.462981 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.462996 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.497397 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:11:13.933212854 +0000 UTC Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.501581 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.501634 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:55 crc kubenswrapper[5010]: E0203 10:02:55.501686 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.501581 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:55 crc kubenswrapper[5010]: E0203 10:02:55.501762 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:55 crc kubenswrapper[5010]: E0203 10:02:55.502081 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.565854 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.565905 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.565922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.565938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.565948 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.667762 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.668153 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.668187 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.668247 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.668272 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.771027 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.771062 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.771070 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.771085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.771094 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.873964 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.874006 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.874016 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.874033 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.874046 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.976641 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.976692 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.976706 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.976726 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.976739 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.079455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.079494 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.079503 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.079517 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.079526 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.181698 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.181771 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.181790 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.181815 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.181833 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.284139 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.284183 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.284194 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.284229 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.284242 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.352441 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.352498 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.352513 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.352530 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.352542 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: E0203 10:02:56.369329 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:56Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.373427 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.373480 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.373490 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.373503 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.373513 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: E0203 10:02:56.387238 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:56Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.390862 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.390888 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.390897 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.390911 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.390921 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: E0203 10:02:56.404263 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:56Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.408376 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.408431 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
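[Every failed PATCH in this window dies on the same call: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate whose notAfter (2025-08-24T17:21:41Z) is well behind the node clock (2026-02-03). A minimal sketch of confirming the expiry from the node, independent of the kubelet; the endpoint comes from the log, while the third-party cryptography package (>= 42, for not_valid_after_utc) is an assumption of this sketch:]

    import datetime
    import socket
    import ssl

    from cryptography import x509  # third-party; assumed available on the node

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint taken from the log above

    # Fetch the peer certificate WITHOUT verifying it -- verification is exactly
    # what fails in the log -- then compare its notAfter to the current clock.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    now = datetime.datetime.now(datetime.timezone.utc)
    print("notAfter:", cert.not_valid_after_utc)  # expect 2025-08-24 17:21:41+00:00
    print("expired:", now > cert.not_valid_after_utc)
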
event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.408444 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.408462 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.408475 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: E0203 10:02:56.421556 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:56Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.424863 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.424911 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
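[Separately from the webhook failure, every status attempt carries the same Ready=False condition: the CRI runtime reports NetworkReady=false because no CNI network config exists yet in /etc/kubernetes/cni/net.d/ (the network operator has not written one). A rough sketch of that discovery step, loosely mimicking how a CRI runtime picks its network config; only the directory path comes from the log, the matching rules are assumptions:]

    import json
    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")  # directory named in the log

    def find_cni_config(conf_dir: Path):
        """Return the first parseable CNI config: sorted order, known extensions."""
        if not conf_dir.is_dir():
            return None
        for path in sorted(conf_dir.iterdir()):
            if path.suffix not in {".conf", ".conflist", ".json"}:
                continue
            try:
                cfg = json.loads(path.read_text())
            except (OSError, json.JSONDecodeError):
                continue  # a broken file is skipped, much like an absent one
            if "plugins" in cfg or "type" in cfg:
                return path, cfg
        return None

    if find_cni_config(CNI_CONF_DIR) is None:
        # This is the state the kubelet keeps reporting above.
        print("no CNI configuration file found; node stays NotReady")
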
event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.424922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.424939 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.424950 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: E0203 10:02:56.438809 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:56Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:56 crc kubenswrapper[5010]: E0203 10:02:56.438967 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.440493 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
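["update node status exceeds retry count" closes out one sync attempt: the kubelet retries the status PATCH a small, fixed number of times per cycle (nodeStatusUpdateRetry, historically 5 -- an assumption about this build) and then gives up until the next sync tick, which is why an identical error block recurs every few hundred milliseconds above. A schematic of that loop; the constants and the bounded demo loop are illustrative, not taken from the log:]

    import time

    NODE_STATUS_UPDATE_RETRY = 5        # long-standing kubelet default; assumed
    NODE_STATUS_UPDATE_FREQUENCY = 10.0  # seconds between sync cycles; assumed

    def patch_node_status() -> bool:
        # Stand-in for the PATCH that keeps failing above (webhook cert expired).
        return False

    def sync_node_status() -> None:
        for _ in range(NODE_STATUS_UPDATE_RETRY):
            if patch_node_status():
                return
        # Corresponds to: "Unable to update node status"
        # err="update node status exceeds retry count"
        print("update node status exceeds retry count")

    for _ in range(3):  # three sync cycles; each one re-logs the same failure
        sync_node_status()
        time.sleep(NODE_STATUS_UPDATE_FREQUENCY)
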
event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.440526 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.440537 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.440552 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.440565 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.497948 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:30:53.718979174 +0000 UTC Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.501431 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:56 crc kubenswrapper[5010]: E0203 10:02:56.501581 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.543105 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.543149 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.543163 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.543182 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.543196 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.644966 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.645016 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.645029 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.645045 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.645058 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.747058 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.747109 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.747123 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.747141 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.747153 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.849719 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.849761 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.849773 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.849789 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.849803 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.952716 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.952773 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.952784 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.952802 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.952812 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.055146 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.055190 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.055201 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.055244 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.055257 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.157448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.157491 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.157503 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.157518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.157530 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.259947 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.260069 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.260085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.260104 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.260117 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.362494 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.362541 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.362551 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.362567 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.362577 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.465631 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.465683 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.465696 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.465714 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.465725 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.498116 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 18:36:19.853027394 +0000 UTC Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.501593 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.501593 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:57 crc kubenswrapper[5010]: E0203 10:02:57.501953 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:57 crc kubenswrapper[5010]: E0203 10:02:57.501755 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.501588 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:57 crc kubenswrapper[5010]: E0203 10:02:57.502068 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.517444 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:57 crc kubenswrapper[5010]: E0203 10:02:57.517624 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:57 crc kubenswrapper[5010]: E0203 10:02:57.517700 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:05.517679629 +0000 UTC m=+55.673655768 (durationBeforeRetry 8s). 
Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.567563 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.567607 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.567615 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.567628 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.567637 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.669793 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.669828 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.669841 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.669856 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.669869 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.773084 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.773147 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.773160 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.773176 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.773187 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.875171 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.875235 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.875250 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.875269 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.875281 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.978145 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.978287 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.978311 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.978340 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.978360 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.081392 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.081457 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.081497 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.081526 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.081548 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.185021 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.185069 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.185085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.185104 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.185119 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.288266 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.288744 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.288977 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.289188 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.289438 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.392838 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.392868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.392877 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.392891 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.392901 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.495864 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.495952 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.496021 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.496088 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.496112 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.499172 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:38:11.686770027 +0000 UTC Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.501519 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:58 crc kubenswrapper[5010]: E0203 10:02:58.501655 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.598379 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.598448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.598464 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.598480 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.598492 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.701844 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.701898 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.701916 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.701945 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.701964 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.805948 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.805988 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.805997 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.806011 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.806020 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.907696 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.907756 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.907770 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.907818 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.907836 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.010284 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.010346 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.010361 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.010385 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.010400 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.113428 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.113472 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.113482 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.113496 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.113506 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.215994 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.216064 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.216088 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.216568 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.216625 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.319547 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.319590 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.319600 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.319614 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.319626 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.321979 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.331399 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.339250 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92e
daf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.352073 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.362876 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 
10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.374899 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.388728 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.403507 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.421851 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.422105 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc 
kubenswrapper[5010]: I0203 10:02:59.422203 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.422318 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.422435 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.422827 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531
651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 obj_retry.go:365] Adding new object: *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.432167 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.446137 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.457748 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.470380 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.482598 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.492252 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.499596 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 06:20:39.86337079 +0000 UTC Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.502138 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.502155 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.502188 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:59 crc kubenswrapper[5010]: E0203 10:02:59.502769 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.502426 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: E0203 10:02:59.502779 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:02:59 crc kubenswrapper[5010]: E0203 10:02:59.502361 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.516864 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.525078 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.525198 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.525286 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.525360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.525437 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.528782 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.627975 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.628061 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.628084 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.628113 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.628135 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.731954 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.732035 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.732058 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.732080 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.732094 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.835306 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.835381 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.835392 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.835413 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.835429 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.952910 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.952980 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.953003 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.953031 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.953058 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.055592 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.055664 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.055689 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.055718 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.055779 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.158056 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.158099 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.158110 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.158126 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.158137 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.261253 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.261344 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.261383 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.261415 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.261434 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.364291 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.364343 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.364360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.364382 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.364397 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.467485 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.467527 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.467535 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.467555 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.467567 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.499843 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:51:20.099977059 +0000 UTC Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.501268 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:00 crc kubenswrapper[5010]: E0203 10:03:00.501441 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.514739 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.528884 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.545257 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.558827 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.570473 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.570525 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.570536 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.570553 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.570568 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.573198 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.583517 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.597675 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.613984 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.626201 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.637730 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.650424 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.661772 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z"
Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.673455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.673504 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.673515 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.673531 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.673541 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.680163 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 obj_retry.go:365] Adding new object: *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.691742 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.702911 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.717987 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.733247 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.775871 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.775915 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.775925 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.775942 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.775951 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.878687 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.878743 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.878756 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.878775 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.878788 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.982109 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.982158 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.982173 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.982188 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.982201 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.083946 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.083983 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.083995 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.084010 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.084020 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.186279 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.186317 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.186327 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.186342 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.186372 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.266466 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.266597 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.266626 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.266689 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:03:33.26667354 +0000 UTC m=+83.422649669 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.266753 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.266834 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:33.266811363 +0000 UTC m=+83.422787542 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.267092 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.267150 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:33.267138942 +0000 UTC m=+83.423115121 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.288970 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.289005 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.289013 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.289026 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.289036 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.367141 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.367490 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.367426 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.367752 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.367838 5010 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.367600 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.368015 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.368061 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.368191 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:33.368038624 +0000 UTC m=+83.524014753 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.368333 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:33.368319881 +0000 UTC m=+83.524296200 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.392086 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.392151 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.392163 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.392181 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.392192 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.494487 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.494522 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.494536 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.494550 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.494561 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.500035 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:14:42.979440186 +0000 UTC Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.501358 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.501404 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.501422 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.501490 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.501584 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.501654 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.597166 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.597233 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.597244 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.597259 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.597269 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.699599 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.699631 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.699639 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.699651 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.699661 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.801451 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.801488 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.801500 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.801513 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.801523 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.904264 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.904337 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.904352 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.904377 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.904396 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.012617 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.012676 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.012692 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.012716 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.012754 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.116371 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.116430 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.116448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.116471 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.116490 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.219357 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.219393 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.219405 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.219421 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.219432 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.322393 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.322468 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.322518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.322539 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.322548 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.425423 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.425456 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.425466 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.425482 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.425492 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.500875 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:35:42.910893277 +0000 UTC Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.501427 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:02 crc kubenswrapper[5010]: E0203 10:03:02.501559 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.527302 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.527337 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.527345 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.527357 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.527366 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.629696 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.629943 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.630048 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.630135 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.630205 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.733199 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.733281 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.733301 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.733326 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.733349 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.836286 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.836318 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.836326 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.836349 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.836366 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.938722 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.939082 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.939202 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.939345 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.939496 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.044095 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.044136 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.044145 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.044158 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.044167 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.146475 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.146504 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.146513 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.146525 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.146535 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.248495 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.248529 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.248541 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.248556 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.248567 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.351819 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.351870 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.351887 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.351908 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.351924 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.454545 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.454590 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.454600 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.454616 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.454627 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.501394 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:45:52.144796794 +0000 UTC Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.501633 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.501720 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:03 crc kubenswrapper[5010]: E0203 10:03:03.501804 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.501847 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:03 crc kubenswrapper[5010]: E0203 10:03:03.502000 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:03 crc kubenswrapper[5010]: E0203 10:03:03.502089 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.556916 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.556951 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.556959 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.556974 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.556983 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.659388 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.659429 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.659440 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.659454 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.659463 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.762037 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.762072 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.762080 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.762093 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.762103 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.864631 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.864673 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.864685 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.864702 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.864714 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.968028 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.968180 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.968209 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.968273 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.968300 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.070734 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.070776 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.070790 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.070814 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.070827 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.172958 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.173004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.173014 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.173030 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.173040 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.275787 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.275840 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.275850 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.275868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.276186 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.378731 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.378770 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.378778 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.378792 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.378802 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.481351 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.481384 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.481393 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.481406 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.481414 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.501146 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:04 crc kubenswrapper[5010]: E0203 10:03:04.501281 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.501729 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 19:31:04.641673913 +0000 UTC Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.584605 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.584696 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.584710 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.584731 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.584744 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.688230 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.688281 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.688292 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.688309 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.688323 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.791296 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.791367 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.791385 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.791405 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.791421 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.893607 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.893669 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.893686 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.893707 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.893719 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.996537 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.996600 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.996621 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.996649 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.996670 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.099403 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.099481 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.099503 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.099533 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.099554 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:05Z","lastTransitionTime":"2026-02-03T10:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.201526 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.201589 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.201613 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.201641 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.201662 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:05Z","lastTransitionTime":"2026-02-03T10:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.304033 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.304102 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.304117 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.304134 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.304149 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:05Z","lastTransitionTime":"2026-02-03T10:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.407308 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.407373 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.407385 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.407405 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.407417 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:05Z","lastTransitionTime":"2026-02-03T10:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.501622 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:05 crc kubenswrapper[5010]: E0203 10:03:05.501864 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.501670 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.501648 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:05 crc kubenswrapper[5010]: E0203 10:03:05.501976 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.502046 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:09:58.216972682 +0000 UTC Feb 03 10:03:05 crc kubenswrapper[5010]: E0203 10:03:05.502268 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.509692 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.509736 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.509750 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.509768 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.509781 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:05Z","lastTransitionTime":"2026-02-03T10:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.612489 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.612519 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:05 crc kubenswrapper[5010]: E0203 10:03:05.612670 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.612688 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.612707 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:05 crc kubenswrapper[5010]: E0203 10:03:05.612722 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:21.612705969 +0000 UTC m=+71.768682108 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs") pod "network-metrics-daemon-clvdz" (UID: "081d0234-b506-49ff-81c9-c535f6e1c588") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.612728 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.612744 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:05Z","lastTransitionTime":"2026-02-03T10:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.715990 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.716035 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.716051 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.716070 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.716084 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:05Z","lastTransitionTime":"2026-02-03T10:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.818577 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.818618 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.818627 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.818641 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.818651 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:05Z","lastTransitionTime":"2026-02-03T10:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.920481 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.920519 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.920527 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.920541 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.920550 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:05Z","lastTransitionTime":"2026-02-03T10:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.022587 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.022628 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.022640 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.022654 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.022667 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.124796 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.124824 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.124832 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.124844 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.124854 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.230724 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.230790 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.230807 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.230831 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.230849 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.333078 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.333130 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.333142 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.333158 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.333180 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.435974 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.436053 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.436077 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.436105 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.436126 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.454653 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.454702 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.454723 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.454744 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.454758 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: E0203 10:03:06.468814 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:06Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.474007 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.474059 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.474069 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.474084 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.474100 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: E0203 10:03:06.490475 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:06Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.495670 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.495739 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
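Every one of the patch attempts above fails the same way: the API server cannot call the node.network-node-identity.openshift.io webhook because the serving certificate at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, well before the node clock of 2026-02-03T10:03:06Z. A minimal diagnostic sketch in Go (an illustration, not kubelet code; it would have to run on the node itself, since the endpoint is loopback-only) that reproduces the x509 verdict by reading the peer certificate's validity window:

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // InsecureSkipVerify lets the handshake complete so the certificate
        // can be read at all; validity is then checked manually below.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("subject:  ", cert.Subject)
        fmt.Println("notBefore:", cert.NotBefore.UTC().Format(time.RFC3339))
        fmt.Println("notAfter: ", cert.NotAfter.UTC().Format(time.RFC3339))
        if time.Now().UTC().After(cert.NotAfter) {
            // Matches the log: current time 2026-02-03T10:03:06Z is after 2025-08-24T17:21:41Z.
            fmt.Println("certificate is expired")
        }
    }

The skip-verify dial is only there so the handshake succeeds and the certificate becomes inspectable; the manual NotAfter comparison then mirrors the check that standard verification fails on.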
event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.495755 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.495776 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.495790 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.501418 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:06 crc kubenswrapper[5010]: E0203 10:03:06.501565 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.503168 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:14:18.905684808 +0000 UTC Feb 03 10:03:06 crc kubenswrapper[5010]: E0203 10:03:06.512028 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154a
fa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:06Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.516123 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.516163 5010 kubelet_node_status.go:724] "Recording event 
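The recurring setters.go:603 entries embed the Ready condition as inline JSON. For anyone scanning these logs programmatically, a small Go sketch that decodes one such condition; the struct below is illustrative, with field names taken directly from the JSON keys shown in the entries, not the kubelet's own type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // NodeCondition mirrors the JSON keys printed by the "Node became not
    // ready" entries above (illustrative, not the kubelet's type).
    type NodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Condition JSON copied verbatim from the log entries above.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c NodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s since %s (%s)\n", c.Type, c.Status, c.LastTransitionTime, c.Reason)
    }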
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.516177 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.516197 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.516231 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: E0203 10:03:06.530751 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:06Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.534190 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.534243 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
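The condition's root cause is the empty CNI configuration directory named in the message. A hedged Go sketch that performs the equivalent check by listing /etc/kubernetes/cni/net.d/; the path comes from the log itself, while the accepted file extensions below are the ones CNI config loaders conventionally look for, an assumption rather than something this log states:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named in the kubelet message above.
        dir := "/etc/kubernetes/cni/net.d/"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI config dir:", err)
            return
        }
        found := 0
        for _, e := range entries {
            // Extensions conventionally accepted by CNI config loaders (assumption).
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                found++
                fmt.Println("found:", e.Name())
            }
        }
        if found == 0 {
            // The situation the runtime keeps reporting above.
            fmt.Println("no CNI configuration file in", dir, "- network provider has not started")
        }
    }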
event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.534260 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.534280 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.534292 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: E0203 10:03:06.545439 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:06Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:06 crc kubenswrapper[5010]: E0203 10:03:06.545616 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.551888 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.551938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.551948 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.551967 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.551980 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.654690 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.654744 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.654757 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.654771 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.654781 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.757845 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.757915 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.757938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.757967 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.757990 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.860562 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.860605 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.860615 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.860627 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.860637 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.963187 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.963262 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.963274 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.963290 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.963300 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.065590 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.065643 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.065663 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.065690 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.065707 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.168627 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.168672 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.168681 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.168695 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.168709 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.271146 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.271183 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.271195 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.271211 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.271267 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.373663 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.373711 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.373722 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.373739 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.373753 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.475920 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.475979 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.475995 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.476015 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.476029 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.501128 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.501169 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz"
Feb 03 10:03:07 crc kubenswrapper[5010]: E0203 10:03:07.501310 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.501386 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:03:07 crc kubenswrapper[5010]: E0203 10:03:07.501653 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 10:03:07 crc kubenswrapper[5010]: E0203 10:03:07.501907 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.501934 5010 scope.go:117] "RemoveContainer" containerID="795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.504021 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:48:01.012441195 +0000 UTC
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.578272 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.578602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.578612 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.578628 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.578637 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.681121 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.681153 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.681161 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.681174 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.681182 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.783708 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.783752 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.783763 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.783779 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.783790 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.833525 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/1.log"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.835732 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3"}
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.836637 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.852541 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.864667 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.886355 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.886391 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.886400 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.886445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.886456 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.902634 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73
093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 obj_retry.go:365] Adding new object: *v1.Pod 
openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.922083 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.937864 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.950256 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.963255 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z"
Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.974066 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.989588 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.989642 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.989656 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.989676 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.989691 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.992523 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.005998 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.016261 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 
10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.028244 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.043730 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.060734 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.076788 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.092135 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.092825 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.092866 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.092877 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.092894 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.092904 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.108669 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc20681
6cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.195104 5010 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.195142 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.195153 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.195166 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.195175 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.298234 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.298334 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.298355 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.298383 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.298400 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.400927 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.400969 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.400981 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.400996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.401005 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.501687 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:03:08 crc kubenswrapper[5010]: E0203 10:03:08.501818 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.503033 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.503083 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.503105 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.503135 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.503157 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.504299 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:18:44.162028586 +0000 UTC
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.606358 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.606428 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.606445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.606460 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.606470 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.709889 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.709966 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.710002 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.710037 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.710062 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.813043 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.813092 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.813105 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.813123 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.813137 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.842541 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/2.log"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.843152 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/1.log"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.846520 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3" exitCode=1
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.846569 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3"}
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.846649 5010 scope.go:117] "RemoveContainer" containerID="795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.848034 5010 scope.go:117] "RemoveContainer" containerID="2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3"
Feb 03 10:03:08 crc kubenswrapper[5010]: E0203 10:03:08.848379 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2"
Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.870928 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.890042 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.907647 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.915309 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.915348 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.915360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.915380 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.915394 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.921976 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.937523 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.950534 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.966429 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.981291 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.001093 5010 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 obj_retry.go:365] Adding new object: *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] 
Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\
\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.012016 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.017614 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.017647 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.017657 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.017671 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.017680 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.027620 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.040857 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.058982 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.071832 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.084589 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.098012 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.134468 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.134518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.134529 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.134545 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.134557 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.137336 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.238269 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.238332 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.238353 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.238401 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.238426 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.341552 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.341685 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.341695 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.341713 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.341725 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.444489 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.444524 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.444535 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.444551 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.444563 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.501971 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.501992 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:09 crc kubenswrapper[5010]: E0203 10:03:09.502122 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:09 crc kubenswrapper[5010]: E0203 10:03:09.502256 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.501995 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:09 crc kubenswrapper[5010]: E0203 10:03:09.502362 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.505124 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 15:44:05.030942135 +0000 UTC Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.546967 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.547003 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.547035 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.547052 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.547063 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.649346 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.649406 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.649420 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.649437 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.649450 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.752360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.752402 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.752414 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.752428 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.752439 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.851707 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/2.log" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.854014 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.854044 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.854056 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.854070 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.854079 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.856365 5010 scope.go:117] "RemoveContainer" containerID="2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3" Feb 03 10:03:09 crc kubenswrapper[5010]: E0203 10:03:09.856562 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.872339 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.885704 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.905161 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.918663 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.937013 5010 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.950925 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.956191 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.956244 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.956254 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.956269 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.956279 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.967428 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.984878 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.996756 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.008917 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.018436 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc 
kubenswrapper[5010]: I0203 10:03:10.030298 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.040945 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.052484 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.058900 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.058940 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.058950 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.058969 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.058983 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.063549 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.076022 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.086617 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.162314 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.162401 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.162439 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.162465 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.162484 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.264809 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.264837 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.264846 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.264859 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.264867 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.367316 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.367410 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.367447 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.367478 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.367504 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.469183 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.469239 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.469248 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.469262 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.469271 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.501969 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:10 crc kubenswrapper[5010]: E0203 10:03:10.502113 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.505770 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:52:17.561889668 +0000 UTC Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.515285 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.534054 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.545678 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.556811 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.568305 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.572525 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.572575 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.572586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.572603 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.572612 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.582355 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.598854 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.612765 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.628735 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.640891 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.655764 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.671091 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.675137 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.675256 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.675274 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.675326 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.675346 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.685194 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.701422 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.713715 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.733039 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.747642 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.778282 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.778316 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.778326 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.778340 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.778350 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.880598 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.880652 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.880670 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.880693 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.880709 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.983302 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.983351 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.983363 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.983379 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.983393 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.085604 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.085652 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.085660 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.085673 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.085682 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.188669 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.188863 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.188922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.189029 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.189113 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.291321 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.291612 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.291697 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.291778 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.291847 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.394714 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.394975 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.395093 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.395202 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.395279 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.498320 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.498387 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.498400 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.498422 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.498434 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.501828 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.501993 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.501837 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:11 crc kubenswrapper[5010]: E0203 10:03:11.502167 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:11 crc kubenswrapper[5010]: E0203 10:03:11.502282 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:11 crc kubenswrapper[5010]: E0203 10:03:11.502185 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.506292 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:21:54.860528932 +0000 UTC Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.600523 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.600794 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.600960 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.601069 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.601235 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.703919 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.703969 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.703982 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.704002 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.704014 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.806517 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.806561 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.806571 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.806586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.806595 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.908680 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.909258 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.909273 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.909292 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.909304 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.011738 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.011797 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.011809 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.011831 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.011849 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.114829 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.114875 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.114892 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.114915 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.114930 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.217778 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.217823 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.217831 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.217844 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.217854 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.319949 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.320016 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.320035 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.320059 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.320072 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.423400 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.423445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.423456 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.423472 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.423485 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.501706 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:12 crc kubenswrapper[5010]: E0203 10:03:12.501886 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.506485 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 21:19:28.94161487 +0000 UTC Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.526235 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.526470 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.526556 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.526656 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.526745 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.630001 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.630406 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.630577 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.630725 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.630845 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.733668 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.733984 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.734081 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.734184 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.734284 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.836857 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.836885 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.836894 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.836909 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.836918 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.938973 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.939004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.939013 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.939028 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.939037 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.041325 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.041359 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.041370 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.041385 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.041397 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:13Z","lastTransitionTime":"2026-02-03T10:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.143913 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.143959 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.143976 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.143999 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.144015 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:13Z","lastTransitionTime":"2026-02-03T10:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.246839 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.246867 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.246876 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.246889 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.246898 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:13Z","lastTransitionTime":"2026-02-03T10:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.349262 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.349293 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.349304 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.349320 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.349332 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:13Z","lastTransitionTime":"2026-02-03T10:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.451789 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.451872 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.451884 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.451899 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.451910 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:13Z","lastTransitionTime":"2026-02-03T10:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.501816 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:13 crc kubenswrapper[5010]: E0203 10:03:13.501945 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.502103 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:13 crc kubenswrapper[5010]: E0203 10:03:13.502145 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.502276 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:13 crc kubenswrapper[5010]: E0203 10:03:13.502330 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.507142 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:48:29.332259903 +0000 UTC Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.554764 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.554793 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.554802 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.554815 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.554823 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:13Z","lastTransitionTime":"2026-02-03T10:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.657224 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.657259 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.657270 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.657284 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.657295 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:13Z","lastTransitionTime":"2026-02-03T10:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.759673 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.759717 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.759732 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.759752 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.759767 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:13Z","lastTransitionTime":"2026-02-03T10:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.861698 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.861742 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.861754 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.861769 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.861781 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:13Z","lastTransitionTime":"2026-02-03T10:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.964117 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.964150 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.964165 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.964184 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.964196 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:13Z","lastTransitionTime":"2026-02-03T10:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.066879 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.066927 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.066938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.066954 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.066967 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.168860 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.168895 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.168905 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.168918 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.168929 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.271350 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.271384 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.271395 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.271411 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.271424 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.373718 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.373762 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.373772 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.373787 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.373795 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.476546 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.476589 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.476602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.476619 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.476630 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.502387 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:14 crc kubenswrapper[5010]: E0203 10:03:14.502501 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.507713 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:19:50.93150374 +0000 UTC Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.579283 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.579645 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.579657 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.579674 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.579685 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.681471 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.681512 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.681524 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.681541 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.681553 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.784185 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.784242 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.784253 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.784269 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.784280 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.886056 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.886093 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.886101 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.886114 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.886122 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.988439 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.988468 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.988477 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.988489 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.988498 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.091007 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.091057 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.091069 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.091087 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.091098 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:15Z","lastTransitionTime":"2026-02-03T10:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.193276 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.193325 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.193342 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.193364 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.193385 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:15Z","lastTransitionTime":"2026-02-03T10:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.295716 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.295768 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.295785 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.295804 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.295816 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:15Z","lastTransitionTime":"2026-02-03T10:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.397834 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.397868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.397879 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.397896 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.397907 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:15Z","lastTransitionTime":"2026-02-03T10:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.502612 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.502697 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.502612 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:15 crc kubenswrapper[5010]: E0203 10:03:15.502787 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:15 crc kubenswrapper[5010]: E0203 10:03:15.502910 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:15 crc kubenswrapper[5010]: E0203 10:03:15.502994 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.503323 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.503362 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.503377 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.503398 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.503412 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:15Z","lastTransitionTime":"2026-02-03T10:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.508087 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 13:10:28.370011358 +0000 UTC Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.605500 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.605539 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.605548 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.605566 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.605576 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:15Z","lastTransitionTime":"2026-02-03T10:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.711774 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.711910 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.711926 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.711943 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.711953 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:15Z","lastTransitionTime":"2026-02-03T10:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.814461 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.814713 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.814794 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.814882 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.814967 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:15Z","lastTransitionTime":"2026-02-03T10:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.916917 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.916966 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.916976 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.916994 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.917003 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:15Z","lastTransitionTime":"2026-02-03T10:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.019822 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.020093 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.020243 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.020356 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.020461 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.122731 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.123120 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.123131 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.123144 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.123154 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.225002 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.225323 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.225435 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.225523 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.225609 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.327245 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.327284 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.327296 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.327310 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.327322 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.428732 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.428764 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.428775 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.428789 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.428799 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.501438 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:16 crc kubenswrapper[5010]: E0203 10:03:16.501562 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.509163 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:51:29.669211386 +0000 UTC Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.530462 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.530514 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.530526 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.530545 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.530559 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.631826 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.631858 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.631868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.631883 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.631893 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.733980 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.734013 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.734023 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.734039 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.734052 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.836617 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.836873 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.836936 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.837010 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.837075 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.906227 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.906283 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.906295 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.906313 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.906325 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: E0203 10:03:16.923998 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:16Z is after 2025-08-24T17:21:41Z"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.928054 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.928088 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.928097 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.928111 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.928120 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:16 crc kubenswrapper[5010]: E0203 10:03:16.939245 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:16Z is after 2025-08-24T17:21:41Z"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.942728 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.942756 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.942764 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.942777 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.942787 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:16 crc kubenswrapper[5010]: E0203 10:03:16.953946 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:16Z is after 2025-08-24T17:21:41Z"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.956991 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.957022 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.957033 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.957049 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.957062 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:16 crc kubenswrapper[5010]: E0203 10:03:16.967822 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:16Z is after 2025-08-24T17:21:41Z"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.970695 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.970751 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.970768 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.970791 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.970805 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:16 crc kubenswrapper[5010]: E0203 10:03:16.982252 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:16Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:16 crc kubenswrapper[5010]: E0203 10:03:16.982442 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.983524 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.983559 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.983571 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.983588 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.983600 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.085556 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.085590 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.085599 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.085615 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.085624 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.188196 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.188490 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.188678 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.188776 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.188851 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
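
The patch failure recorded just above is the root of this stretch of the log: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-03, so every node-status update the kubelet sends is rejected during the TLS handshake. A minimal Go sketch of the validity-window check that produces the "x509: certificate has expired or is not yet valid" error; the cert.pem filename is a placeholder for a locally saved copy of the webhook's certificate, not a path the kubelet itself reads.

```go
// certcheck.go - a minimal sketch of the validity-window test that fails in
// the handshake above. Assumption: the webhook's serving certificate has
// been exported to cert.pem beforehand; the filename is a placeholder.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("cert.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found in cert.pem")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	// The same comparison the TLS stack makes; when it fails, Go clients
	// report "x509: certificate has expired or is not yet valid".
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("invalid: valid %s to %s, current time %s\n",
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339),
			now.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate valid until", cert.NotAfter.UTC().Format(time.RFC3339))
}
```

Run against the certificate named in the error, this would print the same before/after pair the log reports: valid until 2025-08-24T17:21:41Z, current time 2026-02-03T10:03:16Z.
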
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.291231 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.291440 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.291588 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.291945 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.292078 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.394903 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.394952 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.394964 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.394978 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.394990 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.497257 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.497308 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.497318 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.497334 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.497343 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
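
Every KubeletNotReady condition in these entries carries the same message: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI network configuration. On OpenShift that configuration typically appears only once the cluster network operator's default network (here, via Multus) is up. Below is a rough sketch of the directory probe behind the message, assuming the simplified extension set that libcni-style loaders accept (.conf, .conflist, .json); the real loaders also parse and validate each candidate file.

```go
// cnicheck.go - a rough approximation of the readiness probe behind the
// "no CNI configuration file" message. Assumption: simplified from the
// behavior of libcni/ocicni configuration loading.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // the directory named in the log
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			fmt.Println("glob error:", err)
			return
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// The condition the runtime keeps reporting: NetworkReady=false,
		// which in turn holds the node's Ready condition at False.
		fmt.Println("no CNI configuration file in", confDir)
		return
	}
	fmt.Println("CNI configurations present:", found)
}
```

Until a file matching one of those patterns shows up, the runtime keeps returning the same error and the kubelet keeps re-recording the NodeNotReady events seen throughout this section.
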
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.501560 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.501570 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:03:17 crc kubenswrapper[5010]: E0203 10:03:17.501704 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.501577 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:03:17 crc kubenswrapper[5010]: E0203 10:03:17.501831 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 10:03:17 crc kubenswrapper[5010]: E0203 10:03:17.501935 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.510029 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:43:29.374038856 +0000 UTC
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.599322 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.599366 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.599377 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.599394 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.599405 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
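
The certificate_manager.go:356 entries report the same kubelet-serving certificate expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on every pass (2025-11-13 here, other dates in the surrounding entries). That pattern is expected: client-go's certificate manager recomputes the deadline with random jitter, landing somewhere around the 70-90% point of the certificate's validity window, and since the node clock (2026-02-03) is already past every such deadline, rotation is due immediately and the line repeats each second. A sketch of that computation follows; the jitter band is assumed from client-go's certificate manager, and the one-year issuance time is invented for illustration.

```go
// jitter.go - a sketch of the jittered rotation-deadline computation,
// modeled on client-go's certificate manager (assumption: a point in
// roughly the 70-90% band of the validity window).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiration taken from the log entries above; the issuance time is
	// an assumed one-year-earlier value, only for illustration.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.AddDate(-1, 0, 0)
	for i := 0; i < 4; i++ {
		// Each recomputation yields a different deadline, which is why
		// the logged "rotation deadline" changes from entry to entry.
		fmt.Println(nextRotationDeadline(notBefore, notAfter).UTC().Format(time.RFC3339))
	}
}
```

With the assumed one-year window, the 70-90% band runs from roughly November 2025 to January 2026, which is consistent with the deadlines the manager actually logs here.
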
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.702030 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.702056 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.702065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.702076 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.702085 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.805017 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.805066 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.805074 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.805089 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.805098 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.907293 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.907359 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.907376 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.907399 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.907413 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.009919 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.009947 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.009957 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.009970 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.009979 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.112244 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.112274 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.112283 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.112298 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.112308 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.213995 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.214067 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.214079 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.214104 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.214118 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.316422 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.316463 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.316472 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.316486 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.316495 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.419768 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.419830 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.419842 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.419858 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.419869 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.501885 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:18 crc kubenswrapper[5010]: E0203 10:03:18.502102 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.510440 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:32:08.730317674 +0000 UTC Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.522065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.522099 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.522109 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.522143 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.522153 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.625232 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.625278 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.625291 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.625308 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.625319 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.727503 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.727541 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.727551 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.727569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.727581 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.830505 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.830557 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.830569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.830586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.830598 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.932448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.932494 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.932506 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.932525 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.932536 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.035284 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.035324 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.035332 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.035347 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.035357 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.137397 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.137445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.137471 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.137488 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.137497 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.239073 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.239114 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.239126 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.239141 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.239150 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.341057 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.341111 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.341120 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.341134 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.341149 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.443058 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.443092 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.443101 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.443115 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.443124 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.501878 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.501884 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:19 crc kubenswrapper[5010]: E0203 10:03:19.502027 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:19 crc kubenswrapper[5010]: E0203 10:03:19.502092 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.501893 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:19 crc kubenswrapper[5010]: E0203 10:03:19.502182 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.511265 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 15:21:53.989857692 +0000 UTC Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.545006 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.545046 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.545057 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.545073 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.545082 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.647601 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.647638 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.647649 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.647664 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.647674 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.750186 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.750346 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.750362 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.750386 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.750398 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.852391 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.852434 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.852445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.852461 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.852472 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.954294 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.954317 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.954325 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.954338 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.954346 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.057356 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.057399 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.057410 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.057436 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.057452 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.159928 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.159966 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.159982 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.160004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.160042 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.263341 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.263384 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.263393 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.263409 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.263419 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.366247 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.366304 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.366321 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.366344 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.366361 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.468396 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.468431 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.468442 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.468457 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.468468 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.501254 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:20 crc kubenswrapper[5010]: E0203 10:03:20.501389 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.511959 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:05:49.827660673 +0000 UTC Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.512447 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.522882 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.534583 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.546385 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.568461 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.571303 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.571344 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.571361 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.571386 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.571524 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.579998 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.594261 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.606475 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.620430 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.632123 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.644682 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.657871 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.668765 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.673759 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.673790 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.673801 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.673818 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.673830 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.679400 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.693230 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.708324 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.719467 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.776206 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.776272 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.776285 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.776300 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.776312 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.878265 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.878347 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.878359 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.878378 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.878391 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.980893 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.980928 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.980940 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.980955 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.980965 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.083527 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.083579 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.083591 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.083609 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.083622 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.186532 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.186575 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.186586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.186602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.186613 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.289426 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.289489 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.289499 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.289521 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.289538 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.395793 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.395838 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.395850 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.395869 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.395881 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.499510 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.499565 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.499580 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.499602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.499617 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.501366 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:03:21 crc kubenswrapper[5010]: E0203 10:03:21.501475 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.501947 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:03:21 crc kubenswrapper[5010]: E0203 10:03:21.502072 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.502070 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz"
Feb 03 10:03:21 crc kubenswrapper[5010]: E0203 10:03:21.502263 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.511731 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.512519 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:37:14.427568288 +0000 UTC
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.602256 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.602289 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.602301 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.602318 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.602330 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.692165 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz"
Feb 03 10:03:21 crc kubenswrapper[5010]: E0203 10:03:21.692332 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 03 10:03:21 crc kubenswrapper[5010]: E0203 10:03:21.692381 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:53.692368558 +0000 UTC m=+103.848344687 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs") pod "network-metrics-daemon-clvdz" (UID: "081d0234-b506-49ff-81c9-c535f6e1c588") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.704765 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.704799 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.704810 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.704826 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.704841 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.806719 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.806754 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.806765 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.806782 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.806793 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.908915 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.908937 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.908945 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.908958 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.908970 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.011247 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.011285 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.011302 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.011317 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.011329 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.113729 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.113765 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.113773 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.113788 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.113798 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.216530 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.216591 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.216602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.216619 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.216633 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.318596 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.318627 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.318635 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.318647 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.318656 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.420724 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.420763 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.420775 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.420793 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.420806 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.501921 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:03:22 crc kubenswrapper[5010]: E0203 10:03:22.502354 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.502748 5010 scope.go:117] "RemoveContainer" containerID="2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3"
Feb 03 10:03:22 crc kubenswrapper[5010]: E0203 10:03:22.502941 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.513173 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:07:05.176276559 +0000 UTC
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.523076 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.523117 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.523127 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.523146 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.523157 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.625809 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.625849 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.625858 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.625873 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.625881 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.727902 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.727934 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.727944 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.727961 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.727975 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.830774 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.830804 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.830813 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.830825 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.830833 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.933174 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.933239 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.933254 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.933274 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.933287 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.036004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.036040 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.036049 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.036063 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.036075 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.138038 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.138076 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.138085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.138099 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.138108 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.241158 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.241207 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.241255 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.241273 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.241285 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.344045 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.344083 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.344097 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.344112 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.344123 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.446684 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.446719 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.446728 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.446740 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.446751 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.501108 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.501177 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:03:23 crc kubenswrapper[5010]: E0203 10:03:23.501258 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588"
Feb 03 10:03:23 crc kubenswrapper[5010]: E0203 10:03:23.501321 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.501483 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:03:23 crc kubenswrapper[5010]: E0203 10:03:23.501667 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.514018 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:27:59.531334178 +0000 UTC
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.549874 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.549912 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.549922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.549941 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.549952 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.652426 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.652462 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.652475 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.652492 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.652505 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.755268 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.755319 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.755334 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.755353 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.755366 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.857945 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.857987 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.857999 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.858014 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.858028 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.960567 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.960613 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.960623 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.960636 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.960653 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.063020 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.063058 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.063067 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.063080 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.063089 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.165605 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.165635 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.165668 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.165684 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.165717 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.268355 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.268994 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.269155 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.269189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.269204 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.372304 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.372368 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.372385 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.372409 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.372425 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.474990 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.475107 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.475130 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.475160 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.475182 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.501589 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:24 crc kubenswrapper[5010]: E0203 10:03:24.501733 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.514162 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 01:52:52.393131014 +0000 UTC Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.577773 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.577832 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.577855 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.577883 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.577905 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.680762 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.680803 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.680811 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.680826 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.680834 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.783198 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.783250 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.783260 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.783275 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.783286 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.886361 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.886414 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.886431 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.886453 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.886470 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.913471 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/0.log" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.913529 5010 generic.go:334] "Generic (PLEG): container finished" podID="8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef" containerID="b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a" exitCode=1 Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.913558 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerDied","Data":"b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.913916 5010 scope.go:117] "RemoveContainer" containerID="b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.940133 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.953069 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.966918 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.989065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.989114 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.989133 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.989153 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.989168 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.990153 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.005066 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.023004 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.035958 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.051508 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.067151 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.082993 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.091490 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.091523 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc 
kubenswrapper[5010]: I0203 10:03:25.091532 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.091549 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.091559 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.096001 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.110525 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.121653 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.136687 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.147792 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.167830 5010 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.180905 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.194908 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.194938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.194946 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.194961 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.194969 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.196178 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.297643 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.297670 5010 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.297679 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.297694 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.297704 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.399875 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.399903 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.399912 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.399925 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.399935 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.501317 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:25 crc kubenswrapper[5010]: E0203 10:03:25.501410 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.501561 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:25 crc kubenswrapper[5010]: E0203 10:03:25.501609 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.501703 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:25 crc kubenswrapper[5010]: E0203 10:03:25.501741 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.509380 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.509448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.509471 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.509519 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.509542 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.514882 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:33:53.040087362 +0000 UTC Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.612621 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.612765 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.612784 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.612807 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.612824 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.715795 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.715835 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.715846 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.715865 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.715876 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.818167 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.818199 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.818206 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.818239 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.818250 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.918103 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/0.log" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.918159 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerStarted","Data":"d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.919302 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.919342 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.919355 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.919372 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.919383 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.929120 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.939957 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.951360 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.960435 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.976878 5010 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.986187 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.999083 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.007763 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.020542 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.021401 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.021430 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.021439 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.021455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.021465 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.031202 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.041041 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02
:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.052269 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.063134 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 
10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.073627 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.089048 5010 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2f
c9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.105187 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-0
3T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabo
uts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.117404 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.123448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.123489 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.123497 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.123509 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.123519 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.127745 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.226410 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.226446 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.226457 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.226474 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.226487 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.328604 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.328672 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.328691 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.328715 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.328733 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.431163 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.431189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.431197 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.431243 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.431262 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.501379 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:26 crc kubenswrapper[5010]: E0203 10:03:26.501528 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.515247 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:32:26.786900285 +0000 UTC Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.533737 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.533804 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.533821 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.533842 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.533857 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.636584 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.636634 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.636653 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.636675 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.636690 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.739877 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.739927 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.739939 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.739957 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.739975 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.842381 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.842445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.842466 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.842494 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.842517 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.944518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.944562 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.944571 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.944585 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.944595 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.047756 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.047808 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.047823 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.047841 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.047852 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.051595 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.051630 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.051641 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.051654 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.051664 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.067885 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:27Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.072575 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.072615 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.072629 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.072646 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.072658 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.087189 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:27Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.095516 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.095572 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.095586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.095605 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.095626 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.112452 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:27Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.116568 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.116632 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.116653 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.116680 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.116700 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.130359 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:27Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.134698 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.134749 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.134761 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.134779 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.134792 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.147397 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:27Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.147570 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.149986 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.150023 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.150034 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.150050 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.150063 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.252594 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.252637 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.252646 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.252662 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.252671 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.355081 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.355115 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.355130 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.355151 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.355168 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.457781 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.457834 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.457850 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.457869 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.457883 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.501621 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.501687 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.501765 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.501907 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.501975 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.502037 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.515419 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:47:12.64690788 +0000 UTC Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.559909 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.559947 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.559955 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.559971 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.559980 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.661987 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.662061 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.662085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.662114 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.662135 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.502344 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:03:28 crc kubenswrapper[5010]: E0203 10:03:28.502936 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
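The recurring "NetworkReady=false reason:NetworkPluginNotReady" fragment is the kubelet relaying a CRI runtime condition verbatim. A sketch that hand-builds such a status object with the k8s.io/cri-api types to show where those fields live; a real kubelet obtains this from the runtime's Status call rather than constructing it:

package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Hand-built stand-in for the runtime status reported while the
	// CNI config is missing.
	status := &runtimeapi.RuntimeStatus{
		Conditions: []*runtimeapi.RuntimeCondition{
			{Type: runtimeapi.RuntimeReady, Status: true},
			{
				Type:    runtimeapi.NetworkReady,
				Status:  false,
				Reason:  "NetworkPluginNotReady",
				Message: "Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
			},
		},
	}
	for _, c := range status.Conditions {
		fmt.Printf("%s=%t reason=%q\n", c.Type, c.Status, c.Reason)
	}
}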
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.516317 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 23:39:35.142523693 +0000 UTC Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.586713 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.586755 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.586777 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.586795 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.586808 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:28Z","lastTransitionTime":"2026-02-03T10:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.689561 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.689604 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.689613 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.689629 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.689638 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:28Z","lastTransitionTime":"2026-02-03T10:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.792915 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.793096 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.793116 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.793138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.793152 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:28Z","lastTransitionTime":"2026-02-03T10:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.895743 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.895796 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.895805 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.895821 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.895832 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:28Z","lastTransitionTime":"2026-02-03T10:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.998373 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.998445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.998465 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.998492 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.998511 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:28Z","lastTransitionTime":"2026-02-03T10:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.102068 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.102173 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.102227 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.102247 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.102264 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.205244 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.205387 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.205465 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.205493 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.205551 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.308918 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.309004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.309028 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.309061 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.309084 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.411495 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.411534 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.411543 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.411555 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.411565 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.501433 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.501435 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:29 crc kubenswrapper[5010]: E0203 10:03:29.501601 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:29 crc kubenswrapper[5010]: E0203 10:03:29.501674 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.501469 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:29 crc kubenswrapper[5010]: E0203 10:03:29.501790 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
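For working with excerpts like this one, the record headers follow klog's layout behind the journald prefix: severity letter, MMDD date, wall-clock time, PID, then source file:line. A regex sketch for pulling those fields out of such lines; the field names in the output are mine, chosen for this example:

package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches e.g. "E0203 10:03:29.501790 5010 pod_workers.go:1301]".
var klogHeader = regexp.MustCompile(
	`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w.]+:\d+)\]`)

func main() {
	line := `E0203 10:03:29.501790 5010 pod_workers.go:1301] "Error syncing pod, skipping"`
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\n",
			m[1], m[2], m[3], m[4], m[5])
	}
}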
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.514032 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.514078 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.514091 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.514106 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.514119 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.517265 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:47:36.586029313 +0000 UTC Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.619565 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.619634 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.619648 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.621090 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.621598 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.724605 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.724653 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.724669 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.724690 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.724706 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.827366 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.827436 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.827476 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.827507 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.827530 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.930861 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.930906 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.930916 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.930937 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.930950 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.034948 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.034996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.035009 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.035029 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.035042 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.138205 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.138325 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.138349 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.138377 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.138396 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.242378 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.242502 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.242515 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.242535 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.242548 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.345823 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.345887 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.345903 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.345926 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.345946 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.448932 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.448996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.449024 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.449059 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.449083 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.501619 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:30 crc kubenswrapper[5010]: E0203 10:03:30.501745 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.515173 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.517451 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 21:20:45.468978074 +0000 UTC Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.529494 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.544173 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.551138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.551204 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.551262 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.551294 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.551321 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.555670 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.566098 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 
10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.578140 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.591465 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.610786 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.631462 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.650531 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.654518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.654561 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.654577 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.654599 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.654618 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.667173 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.684814 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
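The interleaved NodeNotReady conditions give the second, related symptom: the container runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/ yet. A simplified sketch of that discovery step, assuming the directory path quoted in the message (the runtime's real loader handles more formats and ordering than this):

```go
// cnicheck.go: report whether any CNI network configuration exists in
// the directory named by the NodeNotReady message above. A simplified
// stand-in for the runtime's config discovery, not its real code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("read dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		// This is the state the kubelet keeps reporting: NetworkReady=false
		// until the network plugin (here OVN-Kubernetes) writes its config.
		fmt.Println("no CNI configuration file in", dir)
	}
}
```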
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.698676 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
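The multus-f5tpq entry above shows why the CNI config is missing: kube-multus exited with code 1 after waiting from 10:02:38 to 10:03:23, roughly 45 seconds, for OVN-Kubernetes to write its readiness-indicator file, evidently via a PollImmediate-style wait given the "pollimmediate error: timed out waiting for the condition" tag. A self-contained sketch of that wait loop, with the file path taken from the log and the ~45s deadline inferred from the timestamps (both assumptions about the daemon's actual configuration):

```go
// readinesspoll.go: poll for the readiness-indicator file the way the
// multus log above describes: an immediate first check, then periodic
// retries until a deadline. Illustrative only.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile checks for path immediately, then every interval, until
// timeout elapses.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	path := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if err := waitForFile(path, time.Second, 45*time.Second); err != nil {
		fmt.Println(err) // the failure mode in the log above
		return
	}
	fmt.Println("default network ready:", path)
}
```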
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.726971 5010 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
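In the ovnkube-node-68p7p entry, ovnkube-controller is the crashing half of the loop: it dies because its node-annotation call hits the same expired webhook certificate, and the kubelet then holds it in CrashLoopBackOff. The "back-off 20s" matches the kubelet's restart backoff, which by default starts at 10s and doubles per crash up to a 5m cap; treat those exact numbers as an assumption about this kubelet's configuration. A sketch of that schedule:

```go
// backoff.go: print the restart-delay schedule behind the
// "back-off 20s restarting failed container" message above. The 10s
// initial delay and 5m cap are assumed kubelet defaults.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial = 10 * time.Second
		max     = 5 * time.Minute
	)
	delay := initial
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		// The log's "back-off 20s" with restartCount 2 corresponds to
		// the second crash in this schedule.
		delay *= 2
		if delay > max {
			delay = max
		}
	}
}
```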
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.740036 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.750876 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.756698 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.756737 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.756747 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.756761 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.756771 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.768662 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 
10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 
10:03:30.786676 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.858861 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.858911 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.858922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.858941 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.858955 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.961389 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.961431 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.961439 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.961455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.961466 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.064774 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.064849 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.064868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.064893 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.064911 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.168100 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.168171 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.168194 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.168258 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.168286 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.271282 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.271326 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.271338 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.271356 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.271366 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.374018 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.374052 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.374062 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.374076 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.374085 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.476947 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.477004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.477022 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.477049 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.477083 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.501606 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.501682 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.501707 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz"
Feb 03 10:03:31 crc kubenswrapper[5010]: E0203 10:03:31.501794 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 10:03:31 crc kubenswrapper[5010]: E0203 10:03:31.501900 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588"
Feb 03 10:03:31 crc kubenswrapper[5010]: E0203 10:03:31.502026 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.517882 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 17:43:14.773731812 +0000 UTC
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.580807 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.580857 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.580872 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.580892 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.580906 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.682914 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.682988 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.683011 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.683033 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.683050 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.785754 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.785801 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.785812 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.785828 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.785839 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.888654 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.888720 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.888742 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.888772 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.888794 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.990298 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.990342 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.990350 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.990366 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.990377 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.094049 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.094123 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.094148 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.094177 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.094198 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.196722 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.196773 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.196784 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.196801 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.196813 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.299462 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.299519 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.299537 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.299559 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.299575 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.402312 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.402360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.402371 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.402391 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.402403 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.502173 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:03:32 crc kubenswrapper[5010]: E0203 10:03:32.502324 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.504593 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.504624 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.504634 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.504650 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.504662 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.518986 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:08:11.547284054 +0000 UTC
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.607364 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.607402 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.607412 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.607426 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.607436 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.709630 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.709665 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.709674 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.709687 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.709696 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.812402 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.812449 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.812461 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.812477 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.812488 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.916038 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.916089 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.916114 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.916135 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.916182 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.018277 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.018317 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.018335 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.018352 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.018362 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.120580 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.120649 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.120673 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.120702 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.120723 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.223037 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.223091 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.223107 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.223127 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.223144 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.310192 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.310303 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:37.310283648 +0000 UTC m=+147.466259777 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.310335 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.310365 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.310466 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.310486 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.310513 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:37.310503085 +0000 UTC m=+147.466479214 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.310533 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:37.310520795 +0000 UTC m=+147.466496924 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.325771 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.325825 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.325841 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.325866 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.325888 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.411535 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.411663 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.411844 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.411868 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.411888 5010 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.411914 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.411965 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.411992 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.411965 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:37.411943519 +0000 UTC m=+147.567919688 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.412108 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:37.412076432 +0000 UTC m=+147.568052631 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.429602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.429661 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.429684 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.429715 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.429740 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.501327 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.501367 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.501360 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.501505 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588"
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.501814 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.502038 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.520063 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:01:46.824495813 +0000 UTC
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.532777 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.532809 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.532817 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.532829 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.532840 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.635682 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.635743 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.635764 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.635792 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.635815 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.737938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.737997 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.738012 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.738032 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.738045 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.840977 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.841035 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.841049 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.841065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.841078 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.943903 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.943938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.943949 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.943963 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.943975 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.046450 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.046531 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.046566 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.046595 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.046619 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.150000 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.150039 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.150053 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.150072 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.150089 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.252812 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.252855 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.252863 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.252881 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.252892 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.355798 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.355838 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.355848 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.355865 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.355876 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.458670 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.458754 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.458788 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.458819 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.458839 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.506683 5010 scope.go:117] "RemoveContainer" containerID="2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.507189 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:34 crc kubenswrapper[5010]: E0203 10:03:34.507393 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.520661 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:05:15.019053234 +0000 UTC Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.560956 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.560992 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.561003 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.561019 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.561031 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.663697 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.663723 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.663732 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.663745 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.663755 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.765951 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.765986 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.765994 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.766010 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.766024 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.871050 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.871106 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.871120 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.871138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.871174 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
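[editor's note] The five-record block above repeats roughly every 100 ms: the kubelet re-evaluates the node's Ready condition on every status-sync pass and keeps recording NodeNotReady while no CNI configuration exists under /etc/kubernetes/cni/net.d/. To gauge how long the node stayed in that state, the setters.go "Node became not ready" records can be tallied per second. A minimal Go sketch, assuming one record per line as reflowed above; the filename, regexp, and per-second bucketing are illustrative, not part of the log:

// tallynotready.go - count "Node became not ready" records per second,
// assuming the klog layout seen above ("I0203 10:03:34.766024 ...").
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	// Match the klog timestamp at second granularity on matching records.
	re := regexp.MustCompile(`I\d{4} (\d{2}:\d{2}:\d{2})\.\d+ .*"Node became not ready"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // these records can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	secs := make([]string, 0, len(counts))
	for s := range counts {
		secs = append(secs, s)
	}
	sort.Strings(secs)
	for _, s := range secs {
		fmt.Printf("%s  %d\n", s, counts[s])
	}
}

Run as e.g. "go run tallynotready.go < kubelet.log"; a steady one-per-second tail marks the window in which the node never left NotReady.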
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.950421 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/2.log"
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.954381 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"}
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.954879 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p"
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.973281 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.973276 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:34Z is after 2025-08-24T17:21:41Z"
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.973326 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.973434 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.973454 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.973479 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.024740 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} 
name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCon
tainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.034107 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.048303 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.059396 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z"
Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.069110 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z"
Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.075728 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.075769 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
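[editor's note] The err= field of each "Failed to update status for pod" record embeds the rejected status patch as a twice-escaped JSON string, which is why those records are nearly unreadable above. The patch can be recovered by unquoting twice and re-indenting. A minimal Go sketch, assuming one record per line on stdin; the markers and buffer size are illustrative:

// patchdump.go - pull the doubly escaped status patch out of a
// "Failed to update status for pod" record and print it indented.
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		start := strings.Index(line, `failed to patch status \"`)
		end := strings.Index(line, `\" for pod`)
		if start < 0 || end <= start {
			continue
		}
		esc := line[start+len(`failed to patch status \"`) : end]
		// Escaped once as the err="..." value and once as the JSON
		// string inside it, so unquote twice to recover the patch.
		once, err := strconv.Unquote(`"` + esc + `"`)
		if err != nil {
			log.Printf("first unquote: %v", err)
			continue
		}
		patch, err := strconv.Unquote(`"` + once + `"`)
		if err != nil {
			log.Printf("second unquote: %v", err)
			continue
		}
		var out bytes.Buffer
		if err := json.Indent(&out, []byte(patch), "", "  "); err != nil {
			log.Printf("indent: %v", err)
			continue
		}
		fmt.Println(out.String())
	}
}

Patches whose container messages embed further escaping (as in the ovnkube-controller record above) may still fail json.Indent and fall through to the error path; the sketch handles the common case.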
event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.075780 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.075797 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.075806 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.080817 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.090351 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.103552 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.116939 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.126730 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.142493 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.154398 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.165867 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.176500 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 
10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.178077 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.178117 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.178127 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.178143 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.178152 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.189452 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.199574 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.212554 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.283425 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.283464 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.283473 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.283488 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.283497 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.386204 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.386305 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.386323 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.386348 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.386364 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.488980 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.489023 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.489032 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.489047 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.489056 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.501326 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.501389 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.501403 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:35 crc kubenswrapper[5010]: E0203 10:03:35.501458 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:35 crc kubenswrapper[5010]: E0203 10:03:35.501528 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:35 crc kubenswrapper[5010]: E0203 10:03:35.501620 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.521771 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:56:35.059646578 +0000 UTC Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.591202 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.591295 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.591319 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.591346 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.591366 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.693834 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.693878 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.693889 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.693901 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.693910 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.795828 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.795872 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.795881 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.795908 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.795918 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.897482 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.897525 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.897535 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.897549 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.897560 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.958618 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/3.log" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.959332 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/2.log" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.962749 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" exitCode=1 Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.962806 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.962874 5010 scope.go:117] "RemoveContainer" containerID="2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.966124 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:03:35 crc kubenswrapper[5010]: E0203 10:03:35.966553 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.978523 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.991975 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.999848 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.999897 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.999912 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:35.999933 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:35.999949 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.006545 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc20681
6cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.022902 5010 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.038167 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.053458 5010 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} 
name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:35Z\\\",\\\"message\\\":\\\"omment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 10:03:35.411596 7160 services_controller.go:451] Built service openshift-marketplace/certified-operators cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:03:35.411611 7160 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.062339 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.072521 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.083391 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.092331 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.101800 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.101849 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.101859 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.101873 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.101883 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.102775 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.117697 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.131476 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.142334 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 
10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.154680 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.165598 5010 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2f
c9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.175119 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001
edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.193124 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.203437 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.203470 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.203482 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.203498 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.203510 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.305781 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.305838 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.305850 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.305869 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.305882 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.408469 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.409139 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.409189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.409231 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.409249 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.501822 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:03:36 crc kubenswrapper[5010]: E0203 10:03:36.501986 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.511709 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.511739 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.511748 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.511759 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.511768 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.522274 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 10:28:31.291849512 +0000 UTC
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.614996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.615045 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.615061 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.615088 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.615122 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.718329 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.718395 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.718416 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.718436 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.718452 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.821298 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.821381 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.821405 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.821434 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.821458 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.927452 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.927787 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.927811 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.927840 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.927861 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.968542 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/3.log" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.973510 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:03:36 crc kubenswrapper[5010]: E0203 10:03:36.973763 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.992500 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.010625 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.023355 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.031537 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.031574 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.031584 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.031600 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.031611 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.038796 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.051341 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.064252 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.078685 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.096687 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.111408 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.132636 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.133844 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.133883 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc 
kubenswrapper[5010]: I0203 10:03:37.133895 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.133911 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.133923 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.146534 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb
03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.159724 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.172061 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.183386 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.196439 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.210585 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.236350 5010 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.236435 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.236453 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.236505 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.236527 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.243313 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3
b066401ab66cb111b78a40db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:35Z\\\",\\\"message\\\":\\\"omment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 10:03:35.411596 7160 services_controller.go:451] Built service openshift-marketplace/certified-operators cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:03:35.411611 7160 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.257901 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.339660 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.339702 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.339711 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.339724 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.339734 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.442704 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.442730 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.442739 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.442751 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.442759 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.501055 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.501178 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.501360 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.501410 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.501544 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.501739 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.502514 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.502661 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.502757 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.502854 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.502940 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.520520 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.522469 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:27:53.983199686 +0000 UTC Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.526378 5010 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.526640 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.526919 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.527098 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.527314 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.543852 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.547631 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.547667 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.547683 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.547699 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.547711 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.561874 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.566115 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.566157 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.566176 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.566202 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.566245 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.578633 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.582596 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.582632 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.582643 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.582661 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.582676 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.598338 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.598570 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.603698 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.603727 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.603735 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.603748 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.603756 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.707744 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.707795 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.707807 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.707825 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.707841 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.811007 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.811052 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.811064 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.811081 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.811094 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.914734 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.914801 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.914816 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.914840 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.914856 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.017937 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.017975 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.017984 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.018002 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.018014 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.120117 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.120427 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.120535 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.120701 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.120821 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.223531 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.223858 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.224018 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.224167 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.224344 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.327810 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.327859 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.327874 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.327892 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.327903 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.430195 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.430272 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.430288 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.430308 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.430322 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.501762 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:38 crc kubenswrapper[5010]: E0203 10:03:38.501934 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.524096 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 09:54:05.381243303 +0000 UTC Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.533554 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.533606 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.533625 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.533645 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.533662 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.637112 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.637197 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.637283 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.637320 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.637358 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.740455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.740495 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.740505 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.740519 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.740528 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.843499 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.843534 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.843543 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.843556 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.843564 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.946584 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.946858 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.946947 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.947039 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.947126 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.049933 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.049978 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.049993 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.050013 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.050027 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.152546 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.152590 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.152600 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.152615 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.152625 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.254858 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.254903 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.254914 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.254941 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.254953 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.357281 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.357353 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.357377 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.357406 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.357427 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.459677 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.459716 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.459730 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.459749 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.459763 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.501904 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.502007 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:39 crc kubenswrapper[5010]: E0203 10:03:39.502073 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:39 crc kubenswrapper[5010]: E0203 10:03:39.502149 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.501908 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:39 crc kubenswrapper[5010]: E0203 10:03:39.502433 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.524778 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:14:09.337942731 +0000 UTC Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.561986 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.562073 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.562109 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.562145 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.562170 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.665119 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.665157 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.665167 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.665183 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.665193 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.768368 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.768406 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.768418 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.768434 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.768446 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.870659 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.870719 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.870738 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.870756 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.870769 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.974119 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.974569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.974775 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.974967 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.975140 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.078795 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.078860 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.078878 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.078905 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.078926 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.182465 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.182856 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.183042 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.183317 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.183563 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.286289 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.286588 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.286678 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.286765 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.286847 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.392771 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.392843 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.392856 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.393235 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.393259 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.496303 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.496348 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.496360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.496375 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.496400 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.501695 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:40 crc kubenswrapper[5010]: E0203 10:03:40.501792 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.519530 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz
5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.525596 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:12:21.273161983 +0000 UTC Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.532830 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.546401 5010 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.558113 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.568955 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.580647 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.596610 5010 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:35Z\\\",\\\"message\\\":\\\"omment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 10:03:35.411596 7160 services_controller.go:451] Built service openshift-marketplace/certified-operators cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:03:35.411611 7160 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.599137 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.599185 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.599194 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.599229 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.599240 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.606488 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.618520 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.675720 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.685108 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.699768 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.701428 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.701449 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.701457 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.701470 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.701480 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.711071 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.722901 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.734434 5010 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.744111 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 
crc kubenswrapper[5010]: I0203 10:03:40.754009 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.764151 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b0
52f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.803528 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.803568 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.803579 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 
crc kubenswrapper[5010]: I0203 10:03:40.803593 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.803603 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.906463 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.906536 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.906548 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.906568 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.906580 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.009338 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.009390 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.009405 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.009428 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.009444 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.113243 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.113314 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.113333 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.113355 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.113373 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.216566 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.216632 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.216650 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.216673 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.216690 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.319949 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.319992 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.320004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.320021 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.320031 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.426562 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.426606 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.426617 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.426636 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.426651 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.502055 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.502055 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:41 crc kubenswrapper[5010]: E0203 10:03:41.502763 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.502103 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:41 crc kubenswrapper[5010]: E0203 10:03:41.502883 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:41 crc kubenswrapper[5010]: E0203 10:03:41.502634 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.527379 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 18:20:08.914653323 +0000 UTC Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.529154 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.529230 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.529241 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.529253 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.529264 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.632733 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.632802 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.632828 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.632852 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.632869 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.735392 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.735445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.735456 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.735473 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.735484 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.839185 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.839261 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.839281 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.839301 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.839315 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.941743 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.941779 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.941789 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.941804 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.941815 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.044462 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.044805 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.044945 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.045065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.045161 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.148035 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.148073 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.148085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.148100 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.148109 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.250454 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.250514 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.250530 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.250586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.250603 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.353237 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.353286 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.353298 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.353315 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.353326 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.456691 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.456907 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.456978 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.457104 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.457180 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.501375 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:42 crc kubenswrapper[5010]: E0203 10:03:42.501524 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.528265 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:50:19.647605731 +0000 UTC Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.559761 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.559804 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.559817 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.559839 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.559851 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.661436 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.661469 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.661477 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.661489 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.661499 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.764525 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.764573 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.764591 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.764617 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.764635 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.866844 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.866882 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.866890 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.866904 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.866914 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.969853 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.969917 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.969934 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.969963 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.969983 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.072986 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.073048 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.073067 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.073092 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.073109 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.176451 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.176724 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.176795 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.176861 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.176933 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.280698 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.280770 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.280809 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.280841 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.280864 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.384569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.384609 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.384618 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.384633 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.384643 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.487619 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.487663 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.487674 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.487690 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.487703 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.501261 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.501371 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.501425 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:43 crc kubenswrapper[5010]: E0203 10:03:43.501384 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:43 crc kubenswrapper[5010]: E0203 10:03:43.501632 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:43 crc kubenswrapper[5010]: E0203 10:03:43.501919 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.529702 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:37:43.618484392 +0000 UTC Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.591276 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.591330 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.591341 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.591361 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.591373 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.694514 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.694560 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.694569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.694585 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.694599 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.797607 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.797711 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.797729 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.797756 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.797776 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.901377 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.901420 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.901429 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.901442 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.901451 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.003951 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.004022 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.004042 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.004066 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.004084 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.108007 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.108042 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.108051 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.108065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.108075 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.211721 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.211833 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.211855 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.211878 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.211895 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.315878 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.315963 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.315990 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.316021 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.316057 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.419426 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.419474 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.419483 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.419500 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.419511 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.502204 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:44 crc kubenswrapper[5010]: E0203 10:03:44.502407 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.521784 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.521830 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.521841 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.521858 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.521868 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.529779 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 21:46:46.767642633 +0000 UTC Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.624200 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.624256 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.624269 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.624284 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.624294 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.726920 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.726996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.727020 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.727048 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.727067 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.829312 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.829347 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.829356 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.829369 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.829378 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.932104 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.932150 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.932166 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.932188 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.932246 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.034418 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.034450 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.034459 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.034472 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.034482 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.137089 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.137151 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.137162 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.137179 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.137191 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.239076 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.239307 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.239334 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.239365 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.239390 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.341949 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.342000 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.342014 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.342031 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.342043 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.444659 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.444854 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.444911 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.445016 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.445099 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.501529 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:45 crc kubenswrapper[5010]: E0203 10:03:45.501965 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.501623 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:45 crc kubenswrapper[5010]: E0203 10:03:45.502186 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.501623 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:45 crc kubenswrapper[5010]: E0203 10:03:45.502443 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.530155 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:22:57.517874439 +0000 UTC Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.547085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.547133 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.547145 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.547164 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.547177 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.650200 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.650316 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.650337 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.650369 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.650390 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.753392 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.753712 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.753726 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.753741 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.753751 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.856431 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.856759 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.856857 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.856944 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.857041 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.960201 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.960484 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.960585 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.960669 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.960744 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.063492 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.063804 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.063920 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.064041 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.064151 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.167280 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.167574 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.167731 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.167860 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.168100 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.271620 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.271823 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.272036 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.272162 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.272352 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.374257 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.374298 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.374311 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.374327 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.374339 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.476591 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.476629 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.476639 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.476654 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.476665 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.502156 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:46 crc kubenswrapper[5010]: E0203 10:03:46.502318 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.531234 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:04:41.075540244 +0000 UTC Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.579128 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.579171 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.579193 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.579233 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.579243 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.681579 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.681616 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.681624 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.681639 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.681650 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.784536 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.784580 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.784591 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.784606 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.784617 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.887253 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.887290 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.887298 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.887312 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.887323 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.990453 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.990492 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.990503 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.990517 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.990528 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.093341 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.093422 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.093443 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.093471 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.093494 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.196708 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.196762 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.196778 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.196798 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.196818 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.299524 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.299569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.299581 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.299624 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.299643 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.402803 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.402845 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.402857 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.402873 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.402887 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.501609 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.501765 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.501863 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.501942 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.502009 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.502074 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.505602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.505639 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.505652 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.505666 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.505680 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.532155 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:51:15.085046302 +0000 UTC Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.607894 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.607940 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.607951 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.607976 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.607988 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.710515 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.710573 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.710596 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.710625 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.710646 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.813094 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.813133 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.813144 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.813159 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.813168 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.898632 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.898669 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.898679 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.898692 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.898702 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.910188 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.914069 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.914108 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.914120 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.914136 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.914147 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.928625 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.932117 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.932183 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.932191 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.932204 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.932236 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.945400 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.948667 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.948711 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.948723 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.948738 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.948752 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.961527 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.964985 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.965030 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
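Every status-patch failure in this stretch of the log carries the same root cause, spelled out at the end of the error: the node-identity webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-03, so the API server cannot complete the webhook call and each node-status patch fails with an Internal error. A minimal Go sketch for confirming what that endpoint is actually serving (the address is copied from the log line; InsecureSkipVerify is deliberate here so the handshake completes and the expired certificate can be inspected rather than rejected):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Endpoint taken from the kubelet error message; adjust if the
	// webhook listens elsewhere. Skipping verification is intentional:
	// we want to read the expired certificate, not fail the handshake.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("handshake failed: %v", err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore.UTC(), cert.NotAfter.UTC())
	}
}

A notAfter earlier than the current node time reproduces exactly the x509 "certificate has expired or is not yet valid" failure the kubelet reports on every retry.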
event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.965049 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.965074 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.965090 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.976834 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.976972 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.978346 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.978377 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.978410 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.978428 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.978440 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.081245 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.081303 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.081315 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.081331 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.081342 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.185092 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.185481 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.185594 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.185721 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.185854 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.288473 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.288768 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.288876 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.288998 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.289132 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.392012 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.392067 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.392083 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.392111 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.392126 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.494314 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.494342 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.494352 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.494366 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.494375 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.502559 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:48 crc kubenswrapper[5010]: E0203 10:03:48.502685 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.533180 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:16:22.335549283 +0000 UTC Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.597469 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.597551 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.597582 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.597610 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.597630 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.700523 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.700579 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.700587 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.700614 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.700624 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.803647 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.803702 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.803713 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.803735 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.803749 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.906412 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.906492 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.906518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.906546 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.906571 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.135822 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.135859 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.135870 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.135885 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.135896 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.238928 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.238980 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.239008 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.239033 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.239052 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.341779 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.341824 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.341834 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.341852 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.341864 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.444512 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.444546 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.444555 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.444569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.444578 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.501401 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:49 crc kubenswrapper[5010]: E0203 10:03:49.501541 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.501551 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.501646 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:49 crc kubenswrapper[5010]: E0203 10:03:49.501843 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:49 crc kubenswrapper[5010]: E0203 10:03:49.502492 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.503390 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:03:49 crc kubenswrapper[5010]: E0203 10:03:49.505491 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.520261 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.533646 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:43:03.463764154 +0000 UTC Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.548175 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.548252 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.548268 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.548294 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.548310 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.652172 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.652229 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.652238 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.652253 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.652262 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.755440 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.755517 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.755557 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.755588 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.755612 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.858618 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.858661 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.858671 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.858687 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.858696 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.963599 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.963637 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.963647 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.963664 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.963675 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.066914 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.067018 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.067052 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.067085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.067112 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.169423 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.169461 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.169470 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.169505 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.169515 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.272124 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.272151 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.272159 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.272171 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.272180 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.374077 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.374124 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.374137 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.374154 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.374163 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.476559 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.476585 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.476596 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.476608 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.476618 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.502012 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:50 crc kubenswrapper[5010]: E0203 10:03:50.502243 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
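The certificate_manager.go lines interleaved above are a separate, slower-burning issue: the kubelet-serving certificate is still valid until 2026-02-24, but the rotation deadline printed on each attempt (2025-11-20, then 2025-11-09) already lies in the past relative to the node clock, so the certificate manager re-rolls a jittered deadline on every pass and keeps trying to rotate. A short sketch for checking a PEM certificate's validity window the same way, under the assumption that the serving certificate lives at the conventional kubelet path /var/lib/kubelet/pki/kubelet-server-current.pem (confirm this on the host):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Assumed path: the conventional kubelet serving-certificate
	// location; verify it on the node before relying on it.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-server-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("notBefore=%s notAfter=%s expired=%v\n",
		cert.NotBefore.UTC(), cert.NotAfter.UTC(), time.Now().After(cert.NotAfter))
}

Pointed at the webhook's serving certificate instead, the same check would print the 2025-08-24 notAfter that the kubelet keeps tripping over.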
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.519629 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.534569 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:40:58.691863461 +0000 UTC Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.535790 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.550160 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.561575 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.579044 5010 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.579092 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.579104 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.579130 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.579141 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.579598 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3
b066401ab66cb111b78a40db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:35Z\\\",\\\"message\\\":\\\"omment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 10:03:35.411596 7160 services_controller.go:451] Built service openshift-marketplace/certified-operators cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:03:35.411611 7160 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.590013 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.638136 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478f7c29-f920-438f-bd2f-834ad379acce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33c6da9549a593611fce2b9ac2e1730afa277e407ab3d553648c86cca72df9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf4d9a81d55d48408fc220d27652171a691f91f84894a35677f27f1ea9beaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://9336946ed9378970e4cf4204dae54c84331a56d8bb0c34a96a18756a03564c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32bb7e23791044ac62b774a809eefec90c37195581f3a062ec0328a0f3156771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64179c9dc656cd2ae54ef87a2dd73427521252105f7f7db946b69951cf308654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3172dc98f9bd36f672f65272b6ef0548d5ab55e45c8d1c3309735fc3d20a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd3172dc98f9bd36f672f65272b6ef0548d5ab55e45c8d1c3309735fc3d20a46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95a84d597354ad5b8f4b36049c29ec5
bef9982f82c988bba69e9fbc77958032e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a84d597354ad5b8f4b36049c29ec5bef9982f82c988bba69e9fbc77958032e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a5ba2a290693520ab1c03bfcf9baa02768d6112f452c205d187b827ec065860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a5ba2a290693520ab1c03bfcf9baa02768d6112f452c205d187b827ec065860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.674733 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.681978 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.682042 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.682054 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.682070 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.682080 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.686973 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.701505 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.715307 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.728991 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.744062 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.757787 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.769288 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.782749 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.784309 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.784392 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.784408 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.784432 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.784447 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.798715 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.814090 5010 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.826957 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.887056 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.887120 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.887130 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.887142 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.887151 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.990038 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.990130 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.990155 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.990179 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.990197 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.092766 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.092803 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.092814 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.092828 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.092841 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.195406 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.195448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.195460 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.195478 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.195489 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.297940 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.297983 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.297992 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.298004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.298015 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.400782 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.400826 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.400841 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.400861 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.400886 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.501305 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.501334 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.501305 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:51 crc kubenswrapper[5010]: E0203 10:03:51.501487 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:51 crc kubenswrapper[5010]: E0203 10:03:51.501519 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:51 crc kubenswrapper[5010]: E0203 10:03:51.501584 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.502651 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.502675 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.502683 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.502693 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.502704 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.535362 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:23:13.915078467 +0000 UTC Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.605812 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.605861 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.605873 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.605890 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.605904 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.709040 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.709097 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.709107 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.709127 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.709144 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.812407 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.812504 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.812519 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.812544 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.812561 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.915265 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.915325 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.915336 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.915356 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.915367 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.017603 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.017644 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.017655 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.017674 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.017688 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.120024 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.120071 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.120086 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.120106 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.120121 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.222159 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.222254 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.222269 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.222286 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.222322 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.324421 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.324462 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.324473 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.324491 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.324503 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.427143 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.427228 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.427245 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.427261 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.427272 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.501846 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:52 crc kubenswrapper[5010]: E0203 10:03:52.502048 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.529089 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.529134 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.529149 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.529164 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.529175 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.536441 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 16:16:33.453514951 +0000 UTC Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.632013 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.632089 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.632110 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.632126 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.632136 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.734935 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.734981 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.734993 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.735012 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.735025 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.837881 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.837973 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.838000 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.838030 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.838055 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.940138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.940189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.940201 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.940240 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.940253 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.043541 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.043602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.043612 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.043626 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.043635 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.145264 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.145290 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.145297 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.145309 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.145318 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.247578 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.247642 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.247660 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.247679 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.247693 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.349811 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.349873 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.349898 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.349922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.349937 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.452429 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.452472 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.452485 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.452502 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.452513 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.501106 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.501125 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:53 crc kubenswrapper[5010]: E0203 10:03:53.501263 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.501279 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:53 crc kubenswrapper[5010]: E0203 10:03:53.501332 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:53 crc kubenswrapper[5010]: E0203 10:03:53.501378 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.537043 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:23:10.316776404 +0000 UTC Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.554706 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.554741 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.554752 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.554767 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.554778 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.657121 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.657189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.657202 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.657236 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.657249 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.728808 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:53 crc kubenswrapper[5010]: E0203 10:03:53.729005 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:03:53 crc kubenswrapper[5010]: E0203 10:03:53.729086 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:57.729066997 +0000 UTC m=+167.885043126 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs") pod "network-metrics-daemon-clvdz" (UID: "081d0234-b506-49ff-81c9-c535f6e1c588") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.758907 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.758947 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.758959 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.758976 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.758986 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.861449 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.861479 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.861487 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.861500 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.861509 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.964121 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.964168 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.964179 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.964194 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.964206 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.066738 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.066765 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.066775 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.066788 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.066796 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.199331 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.199360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.199370 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.199384 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.199394 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.301811 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.301903 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.301920 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.301938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.301949 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.404091 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.404138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.404150 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.404168 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.404179 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.502096 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:54 crc kubenswrapper[5010]: E0203 10:03:54.502281 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.505632 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.505660 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.505668 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.505678 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.505688 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.537686 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:46:56.072638831 +0000 UTC Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.608199 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.608255 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.608264 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.608278 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.608287 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.711435 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.711480 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.711501 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.711524 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.711541 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.814051 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.814084 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.814093 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.814108 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.814120 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.916372 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.916454 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.916468 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.916491 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.916506 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.018797 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.018843 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.018855 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.018874 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.018885 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.121624 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.121665 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.121677 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.121692 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.121704 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.223798 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.223868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.223885 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.223909 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.223928 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.326610 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.326658 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.326670 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.326685 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.326695 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.430039 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.430119 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.430132 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.430149 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.430165 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.501461 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.501461 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.501858 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:55 crc kubenswrapper[5010]: E0203 10:03:55.502062 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:55 crc kubenswrapper[5010]: E0203 10:03:55.502137 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:55 crc kubenswrapper[5010]: E0203 10:03:55.502295 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.532256 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.532297 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.532307 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.532322 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.532331 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.538323 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:59:35.030311233 +0000 UTC Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.635447 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.635483 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.635491 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.635505 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.635515 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.738814 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.738856 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.738865 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.738882 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.738892 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.841654 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.841706 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.841718 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.841735 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.841748 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.944796 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.944830 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.944837 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.944850 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.944860 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.047491 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.047532 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.047542 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.047559 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.047570 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.149781 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.149824 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.149836 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.149849 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.149858 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.252480 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.252527 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.252538 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.252575 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.252587 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.354984 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.355079 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.355160 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.355302 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.355340 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.457534 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.457831 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.458076 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.458261 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.458411 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.501169 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:56 crc kubenswrapper[5010]: E0203 10:03:56.501488 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.539125 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:30:11.449634721 +0000 UTC Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.561275 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.561321 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.561337 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.561357 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.561375 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.663386 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.663436 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.663454 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.663476 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.663494 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... this five-record block (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats near-verbatim roughly every 100 ms, with only the timestamps advancing, through Feb 03 10:03:58.221; the distinct records interleaved in that window are kept below ...]
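The setters.go records above show the Ready condition the kubelet publishes while the CNI configuration is missing. A minimal sketch of the same JSON shape, using a struct defined locally for illustration (the real kubelet populates k8s.io/api/core/v1.NodeCondition; the local type here is an assumption that only mirrors the fields visible in the log):

// node_condition_sketch.go: marshal a Ready=False condition shaped like the
// setters.go records above. Local struct for this sketch only, not the
// kubelet's own type.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	now := time.Now().UTC().Truncate(time.Second).Format(time.RFC3339)
	cond := nodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, err := json.Marshal(cond)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // same shape as condition={...} in the log
}

Until something drops a CNI config under /etc/kubernetes/cni/net.d/, the runtime keeps answering NetworkReady=false and the condition above stays False on every sync.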
Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.501849 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:03:57 crc kubenswrapper[5010]: E0203 10:03:57.501962 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.502011 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz"
Feb 03 10:03:57 crc kubenswrapper[5010]: E0203 10:03:57.502066 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588"
Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.502580 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:03:57 crc kubenswrapper[5010]: E0203 10:03:57.502907 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.540010 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:43:06.482694299 +0000 UTC
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.243885 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"]
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.244321 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.246857 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.247279 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.247371 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.247371 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.280732 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-89h2z" podStartSLOduration=83.280708569 podStartE2EDuration="1m23.280708569s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.263449793 +0000 UTC m=+108.419425932" watchObservedRunningTime="2026-02-03 10:03:58.280708569 +0000 UTC m=+108.436684698"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.294970 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-cvpds" podStartSLOduration=83.294950558 podStartE2EDuration="1m23.294950558s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.281027678 +0000 UTC m=+108.437003817" watchObservedRunningTime="2026-02-03 10:03:58.294950558 +0000 UTC m=+108.450926697"
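The pod_startup_latency_tracker records above carry their own arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp. A small sketch reproducing the node-resolver-89h2z figure from the two timestamps in the record (the layout string is Go's default time.Time formatting, which these log values round-trip through):

// slo_duration_sketch.go: recompute podStartSLOduration=83.280708569 for
// openshift-dns/node-resolver-89h2z from the timestamps logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-02-03 10:02:35 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-02-03 10:03:58.280708569 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created)) // 1m23.280708569s, i.e. 83.280708569s
}

The zero-valued firstStartedPulling/lastFinishedPulling timestamps indicate no image pull was observed for these pods, which appears to be why podStartE2EDuration equals podStartSLOduration in every record here.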
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.347815 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-f5tpq" podStartSLOduration=83.347797316 podStartE2EDuration="1m23.347797316s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.347248381 +0000 UTC m=+108.503224520" watchObservedRunningTime="2026-02-03 10:03:58.347797316 +0000 UTC m=+108.503773445"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.360299 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podStartSLOduration=83.360278095 podStartE2EDuration="1m23.360278095s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.360087339 +0000 UTC m=+108.516063478" watchObservedRunningTime="2026-02-03 10:03:58.360278095 +0000 UTC m=+108.516254224"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.375003 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.375054 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.375083 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.375131 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.375191 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.391080 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7lfkq" podStartSLOduration=83.391059799 podStartE2EDuration="1m23.391059799s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.390946336 +0000 UTC m=+108.546922485" watchObservedRunningTime="2026-02-03 10:03:58.391059799 +0000 UTC m=+108.547035948"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.416838 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=9.416814729 podStartE2EDuration="9.416814729s" podCreationTimestamp="2026-02-03 10:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.416440898 +0000 UTC m=+108.572417037" watchObservedRunningTime="2026-02-03 10:03:58.416814729 +0000 UTC m=+108.572790858"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.463939 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.463921572 podStartE2EDuration="1m29.463921572s" podCreationTimestamp="2026-02-03 10:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.463879521 +0000 UTC m=+108.619855660" watchObservedRunningTime="2026-02-03 10:03:58.463921572 +0000 UTC m=+108.619897701"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.476349 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.476405 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.476433 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.476490 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.476547 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.476619 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.476664 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.477261 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.481941 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=85.481923619 podStartE2EDuration="1m25.481923619s" podCreationTimestamp="2026-02-03 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.481493867 +0000 UTC m=+108.637469996" watchObservedRunningTime="2026-02-03 10:03:58.481923619 +0000 UTC m=+108.637899748"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.482292 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.493902 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.493885053 podStartE2EDuration="59.493885053s" podCreationTimestamp="2026-02-03 10:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.493526983 +0000 UTC m=+108.649503112" watchObservedRunningTime="2026-02-03 10:03:58.493885053 +0000 UTC m=+108.649861182"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.496599 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.501375 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:03:58 crc kubenswrapper[5010]: E0203 10:03:58.501511 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.535735 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=37.535714255 podStartE2EDuration="37.535714255s" podCreationTimestamp="2026-02-03 10:03:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.535232131 +0000 UTC m=+108.691208280" watchObservedRunningTime="2026-02-03 10:03:58.535714255 +0000 UTC m=+108.691690374"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.536172 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" podStartSLOduration=83.536167578 podStartE2EDuration="1m23.536167578s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.52442644 +0000 UTC m=+108.680402579" watchObservedRunningTime="2026-02-03 10:03:58.536167578 +0000 UTC m=+108.692143707"
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.541187 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 16:12:25.677292201 +0000 UTC
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.541277 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.547684 5010 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.562001 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"
Feb 03 10:03:59 crc kubenswrapper[5010]: I0203 10:03:59.167114 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" event={"ID":"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50","Type":"ContainerStarted","Data":"7b807cb4be28218027fc16855c54c087d9ae8be394606a21c1308e9f78a83a93"}
Feb 03 10:03:59 crc kubenswrapper[5010]: I0203 10:03:59.167168 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" event={"ID":"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50","Type":"ContainerStarted","Data":"be38329100afa7716b13b0d201891bde5a0caebc37836d91c2e14cf54d247542"}
[... from here the paired "No sandbox for pod can be found. Need to start a new one" / "Error syncing pod, skipping" records for network-check-target-xd92c, network-metrics-daemon-clvdz, network-check-source-55646444c4-trplf and networking-console-plugin-85b44fc459-gdk6g recur on every ~2 s sync through Feb 03 10:04:19; only the remaining distinct records are kept below ...]
Feb 03 10:04:00 crc kubenswrapper[5010]: I0203 10:04:00.504014 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"
Feb 03 10:04:00 crc kubenswrapper[5010]: E0203 10:04:00.504325 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2"
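The two certificate_manager records above pick rotation deadlines months before the 2026-02-24 expiry, and the deadline differs between the two log lines. That is consistent with client-go jittering the deadline to a random point late in the certificate's validity; the 0.7-0.9 window and the one-year validity below are assumptions for illustration, not values read from the log:

// rotation_deadline_sketch.go: pick a jittered rotation deadline in the last
// ~10-30% of a certificate's validity (assumed client-go behaviour; the
// jitter factors and the one-year validity are assumptions of this sketch).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, err := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	if err != nil {
		panic(err)
	}
	notBefore := notAfter.AddDate(-1, 0, 0) // hypothetical one-year validity
	// Under these assumptions both logged deadlines (2025-11-19 and
	// 2025-11-29) fall inside the window this produces.
	fmt.Println(rotationDeadline(notBefore, notAfter))
}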
Feb 03 10:04:10 crc kubenswrapper[5010]: E0203 10:04:10.449185 5010 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Feb 03 10:04:10 crc kubenswrapper[5010]: E0203 10:04:10.601124 5010 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.206176 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/1.log"
Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.207131 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/0.log"
Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.207326 5010 generic.go:334] "Generic (PLEG): container finished" podID="8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef" containerID="d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe" exitCode=1
Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.207399 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerDied","Data":"d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe"}
Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.207483 5010 scope.go:117] "RemoveContainer" containerID="b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a"
Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.208327 5010 scope.go:117] "RemoveContainer" containerID="d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe"
Feb 03 10:04:11 crc kubenswrapper[5010]: E0203 10:04:11.208803 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-f5tpq_openshift-multus(8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef)\"" pod="openshift-multus/multus-f5tpq" podUID="8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef"
Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.235095 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" podStartSLOduration=96.235077852 podStartE2EDuration="1m36.235077852s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:59.182642359 +0000 UTC m=+109.338618508" watchObservedRunningTime="2026-02-03 10:04:11.235077852 +0000 UTC m=+121.391053981"
Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.503118 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"
Feb 03 10:04:11 crc kubenswrapper[5010]: E0203 10:04:11.503377 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2"
Feb 03 10:04:12 crc kubenswrapper[5010]: I0203 10:04:12.213134 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/1.log"
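kube-multus above is at "back-off 10s" while ovnkube-controller is already at "back-off 40s": the kubelet doubles the crash-loop delay on each restart from a 10 s base up to a 5 m cap (commonly documented defaults; treat both constants as assumptions of this sketch, not values confirmed by the log). A sketch of the progression, under which 40s would correspond to roughly the container's third crash:

// crashloop_backoff_sketch.go: the delay sequence implied by the
// "back-off 10s" / "back-off 40s" records above (assumed 10s base, 5m cap).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay // 10s, 20s, 40s, 1m20s, 2m40s, then capped at 5m
		}
	}
}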
Feb 03 10:04:15 crc kubenswrapper[5010]: E0203 10:04:15.602863 5010 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 03 10:04:19 crc kubenswrapper[5010]: I0203 10:04:19.501957 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:04:19 crc kubenswrapper[5010]: E0203 10:04:19.502004 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:19 crc kubenswrapper[5010]: E0203 10:04:19.502067 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:20 crc kubenswrapper[5010]: I0203 10:04:20.502026 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:20 crc kubenswrapper[5010]: E0203 10:04:20.503091 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:20 crc kubenswrapper[5010]: E0203 10:04:20.603622 5010 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 10:04:21 crc kubenswrapper[5010]: I0203 10:04:21.501953 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:21 crc kubenswrapper[5010]: I0203 10:04:21.502019 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:21 crc kubenswrapper[5010]: I0203 10:04:21.502106 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:21 crc kubenswrapper[5010]: E0203 10:04:21.502154 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:21 crc kubenswrapper[5010]: E0203 10:04:21.502387 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:21 crc kubenswrapper[5010]: E0203 10:04:21.502490 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:22 crc kubenswrapper[5010]: I0203 10:04:22.501618 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:22 crc kubenswrapper[5010]: E0203 10:04:22.501733 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:23 crc kubenswrapper[5010]: I0203 10:04:23.502080 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:23 crc kubenswrapper[5010]: I0203 10:04:23.502115 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:23 crc kubenswrapper[5010]: E0203 10:04:23.502368 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:23 crc kubenswrapper[5010]: I0203 10:04:23.502408 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:23 crc kubenswrapper[5010]: E0203 10:04:23.503045 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:23 crc kubenswrapper[5010]: E0203 10:04:23.503168 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:23 crc kubenswrapper[5010]: I0203 10:04:23.503615 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.254842 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/3.log" Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.259478 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.260450 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.290197 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podStartSLOduration=109.290175377 podStartE2EDuration="1m49.290175377s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:24.289360244 +0000 UTC m=+134.445336383" watchObservedRunningTime="2026-02-03 10:04:24.290175377 +0000 UTC m=+134.446151516" Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.307201 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-clvdz"] Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.307329 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:24 crc kubenswrapper[5010]: E0203 10:04:24.307433 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.503501 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:24 crc kubenswrapper[5010]: E0203 10:04:24.503696 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.503820 5010 scope.go:117] "RemoveContainer" containerID="d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe" Feb 03 10:04:25 crc kubenswrapper[5010]: I0203 10:04:25.264448 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/1.log" Feb 03 10:04:25 crc kubenswrapper[5010]: I0203 10:04:25.264799 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerStarted","Data":"350b279aaf7efa7dad21bc0c20fa082b7c655a83b208a5091e614ce3efe34ce4"} Feb 03 10:04:25 crc kubenswrapper[5010]: I0203 10:04:25.501415 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:25 crc kubenswrapper[5010]: I0203 10:04:25.501479 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:25 crc kubenswrapper[5010]: E0203 10:04:25.501856 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:25 crc kubenswrapper[5010]: E0203 10:04:25.501861 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:25 crc kubenswrapper[5010]: E0203 10:04:25.604909 5010 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 10:04:26 crc kubenswrapper[5010]: I0203 10:04:26.501200 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:26 crc kubenswrapper[5010]: E0203 10:04:26.501588 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:26 crc kubenswrapper[5010]: I0203 10:04:26.501249 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:26 crc kubenswrapper[5010]: E0203 10:04:26.501693 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:27 crc kubenswrapper[5010]: I0203 10:04:27.501123 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:27 crc kubenswrapper[5010]: I0203 10:04:27.501195 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:27 crc kubenswrapper[5010]: E0203 10:04:27.501324 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:27 crc kubenswrapper[5010]: E0203 10:04:27.501449 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:28 crc kubenswrapper[5010]: I0203 10:04:28.502048 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:28 crc kubenswrapper[5010]: I0203 10:04:28.502096 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:28 crc kubenswrapper[5010]: E0203 10:04:28.502205 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:28 crc kubenswrapper[5010]: E0203 10:04:28.502362 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:29 crc kubenswrapper[5010]: I0203 10:04:29.502027 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:29 crc kubenswrapper[5010]: I0203 10:04:29.502038 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:29 crc kubenswrapper[5010]: E0203 10:04:29.502246 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:29 crc kubenswrapper[5010]: E0203 10:04:29.502302 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:30 crc kubenswrapper[5010]: I0203 10:04:30.503035 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:30 crc kubenswrapper[5010]: I0203 10:04:30.503050 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:30 crc kubenswrapper[5010]: E0203 10:04:30.506801 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:30 crc kubenswrapper[5010]: E0203 10:04:30.506942 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:31 crc kubenswrapper[5010]: I0203 10:04:31.502010 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:31 crc kubenswrapper[5010]: I0203 10:04:31.502088 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:31 crc kubenswrapper[5010]: I0203 10:04:31.505531 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 03 10:04:31 crc kubenswrapper[5010]: I0203 10:04:31.505724 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 03 10:04:32 crc kubenswrapper[5010]: I0203 10:04:32.502656 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:32 crc kubenswrapper[5010]: I0203 10:04:32.502674 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:32 crc kubenswrapper[5010]: I0203 10:04:32.504752 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 03 10:04:32 crc kubenswrapper[5010]: I0203 10:04:32.505560 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 03 10:04:32 crc kubenswrapper[5010]: I0203 10:04:32.505622 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 03 10:04:32 crc kubenswrapper[5010]: I0203 10:04:32.506948 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.380520 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:37 crc kubenswrapper[5010]: E0203 10:04:37.380704 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:06:39.38067189 +0000 UTC m=+269.536648029 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.380849 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.380884 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.381996 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.390241 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.481577 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.481635 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.485060 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 
10:04:37.485989 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.516845 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.523945 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.622978 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:37 crc kubenswrapper[5010]: W0203 10:04:37.816553 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-5e1a7306731dd81d301834454f60668151e902006b4113f2287a12ec90905189 WatchSource:0}: Error finding container 5e1a7306731dd81d301834454f60668151e902006b4113f2287a12ec90905189: Status 404 returned error can't find the container with id 5e1a7306731dd81d301834454f60668151e902006b4113f2287a12ec90905189 Feb 03 10:04:38 crc kubenswrapper[5010]: I0203 10:04:38.303987 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3413bfbed34b65e745726b9346066c38fd2609458111021ec8f48d5f4b46a753"} Feb 03 10:04:38 crc kubenswrapper[5010]: I0203 10:04:38.304049 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5e1a7306731dd81d301834454f60668151e902006b4113f2287a12ec90905189"} Feb 03 10:04:38 crc kubenswrapper[5010]: I0203 10:04:38.310514 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0376cc375a0e1e8c69dd83f5dd576d65d1cf311b80f2b866b444b1e0575da47d"} Feb 03 10:04:38 crc kubenswrapper[5010]: I0203 10:04:38.310567 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bc517f5913017e8b7d1def57ce7587beb16dbbf0da5f1d454399fb8949116309"} Feb 03 10:04:38 crc kubenswrapper[5010]: I0203 10:04:38.311639 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b1fa09b9e7974cb2dcc26ee6df62c655a70c382f980a0b20d974477d4a1ec12a"} Feb 03 10:04:38 crc kubenswrapper[5010]: I0203 10:04:38.311671 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"56efd723615985c2b4f0ba50cd95709e1b969ff835681c0261c48845a408dc40"} Feb 03 10:04:38 crc kubenswrapper[5010]: I0203 10:04:38.311842 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.336113 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.369076 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc7dd"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.369581 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.370036 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9lvbs"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.370730 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.371045 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.371657 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.371863 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.371923 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.372227 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.372306 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.372629 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.372797 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.377966 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.380403 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.380422 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.380439 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381249 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381265 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381284 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381409 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381532 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381838 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381911 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381933 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381990 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381997 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.382194 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.382296 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.382355 5010 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.382430 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.382470 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.384644 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.384817 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.384955 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.393568 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.394363 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.395301 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-wtcpj"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.395741 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.397400 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.397841 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.398360 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rkqd6"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.398973 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.403764 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-client-ca\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.403827 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23cdf53e-881f-4cf2-b557-e087a017b7ec-config\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.403995 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.404055 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/23cdf53e-881f-4cf2-b557-e087a017b7ec-machine-approver-tls\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.404099 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzqxj\" (UniqueName: \"kubernetes.io/projected/61153282-2bd6-4bbf-a04a-76909b13f961-kube-api-access-wzqxj\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405208 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405364 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-etcd-client\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405457 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61153282-2bd6-4bbf-a04a-76909b13f961-serving-cert\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405576 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/cf586c8c-c859-44a2-9b28-16708745cda1-audit-dir\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405612 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23cdf53e-881f-4cf2-b557-e087a017b7ec-auth-proxy-config\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405655 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsc2k\" (UniqueName: \"kubernetes.io/projected/23cdf53e-881f-4cf2-b557-e087a017b7ec-kube-api-access-nsc2k\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405689 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27ae235-3c1c-4ee0-85b6-a53477e335e5-serving-cert\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405760 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-client-ca\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405784 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405925 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-config\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405949 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf586c8c-c859-44a2-9b28-16708745cda1-node-pullsecrets\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406171 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-image-import-ca\") pod 
\"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406189 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-serving-cert\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406411 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-encryption-config\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406470 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzx2n\" (UniqueName: \"kubernetes.io/projected/e27ae235-3c1c-4ee0-85b6-a53477e335e5-kube-api-access-lzx2n\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406640 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-audit\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406681 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-config\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406701 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d7m8\" (UniqueName: \"kubernetes.io/projected/cf586c8c-c859-44a2-9b28-16708745cda1-kube-api-access-7d7m8\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406734 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-config\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406780 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-etcd-serving-ca\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.407577 
5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7ztl2"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.407799 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.410258 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.413045 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.432191 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.433071 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6t4bv"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.433383 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.433802 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.434704 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.434961 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.435263 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.435483 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.436029 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.436118 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.436269 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.436302 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.436738 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438289 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438459 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438508 5010 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438568 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438660 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438740 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438817 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438886 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 03 10:04:39 crc kubenswrapper[5010]: W0203 10:04:39.438960 5010 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.438990 5010 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.439059 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.439931 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc7dd"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.439977 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.439999 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440025 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440099 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440163 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440176 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440205 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440291 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440375 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440451 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440550 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440625 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440679 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440749 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440808 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441066 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441178 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441193 5010 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441235 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441312 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441363 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441387 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441567 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.442376 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.443931 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.445248 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.449334 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.450617 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.450963 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.451394 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.454610 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.455957 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.461783 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.462129 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.462151 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.462426 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.462462 5010 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.473499 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.474435 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x857s"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.475178 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.479554 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.481627 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.481866 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.481898 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.485899 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.489306 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.496694 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jvtp4"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.497276 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.497471 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-jvtp4" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.508896 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.509165 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.509298 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.509476 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.509688 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.509699 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510358 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510585 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-config\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510614 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d7m8\" (UniqueName: \"kubernetes.io/projected/cf586c8c-c859-44a2-9b28-16708745cda1-kube-api-access-7d7m8\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510643 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/594e9304-c63f-4d73-bcad-5258c1ebdd6d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510670 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad56317f-8d37-4d59-9abe-346b4340a30c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8qfbt\" (UID: \"ad56317f-8d37-4d59-9abe-346b4340a30c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510694 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-dir\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: 
\"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510715 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-client\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510738 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510760 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510780 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510799 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-config\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510819 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510850 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-config\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510873 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-serving-cert\") pod \"console-f9d7485db-wtcpj\" (UID: 
\"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510895 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510915 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510936 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s54b\" (UniqueName: \"kubernetes.io/projected/291724bc-0382-45d5-a089-356f8e04feb5-kube-api-access-8s54b\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510956 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-tls\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510980 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-etcd-serving-ca\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511001 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-config\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511022 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e96179c-7517-40d5-918f-1fc379e16fec-serving-cert\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511053 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511074 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-client-ca\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511094 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23cdf53e-881f-4cf2-b557-e087a017b7ec-config\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511121 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511143 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-ca\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511167 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511192 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/23cdf53e-881f-4cf2-b557-e087a017b7ec-machine-approver-tls\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511235 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511259 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-service-ca\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: 
I0203 10:04:39.511278 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511489 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511977 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512049 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-etcd-serving-ca\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512278 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-config\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512328 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/594e9304-c63f-4d73-bcad-5258c1ebdd6d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.512350 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.012337716 +0000 UTC m=+150.168313935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512440 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512473 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzqxj\" (UniqueName: \"kubernetes.io/projected/61153282-2bd6-4bbf-a04a-76909b13f961-kube-api-access-wzqxj\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512520 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-etcd-client\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512546 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512571 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v69f4\" (UniqueName: \"kubernetes.io/projected/2e96179c-7517-40d5-918f-1fc379e16fec-kube-api-access-v69f4\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512584 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512594 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-service-ca-bundle\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512616 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-config\") 
pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513010 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513260 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513414 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-client-ca\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513793 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61153282-2bd6-4bbf-a04a-76909b13f961-serving-cert\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513836 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513863 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/291724bc-0382-45d5-a089-356f8e04feb5-serving-cert\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513792 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513989 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-oauth-serving-cert\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.514207 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf586c8c-c859-44a2-9b28-16708745cda1-audit-dir\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.514283 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-config\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.514612 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23cdf53e-881f-4cf2-b557-e087a017b7ec-config\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.514797 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.515073 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.515457 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.515461 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.515710 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.515957 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf586c8c-c859-44a2-9b28-16708745cda1-audit-dir\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.515997 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23cdf53e-881f-4cf2-b557-e087a017b7ec-auth-proxy-config\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516053 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc6wt\" (UniqueName: \"kubernetes.io/projected/45194a2a-320c-439d-9070-2c534070b7e4-kube-api-access-dc6wt\") pod \"dns-operator-744455d44c-7ztl2\" (UID: \"45194a2a-320c-439d-9070-2c534070b7e4\") " pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516235 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516663 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23cdf53e-881f-4cf2-b557-e087a017b7ec-auth-proxy-config\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516722 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516732 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfwvg\" (UniqueName: \"kubernetes.io/projected/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-kube-api-access-kfwvg\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516762 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-trusted-ca\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516809 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsc2k\" (UniqueName: \"kubernetes.io/projected/23cdf53e-881f-4cf2-b557-e087a017b7ec-kube-api-access-nsc2k\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516858 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516886 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk877\" (UniqueName: \"kubernetes.io/projected/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-kube-api-access-fk877\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516916 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27ae235-3c1c-4ee0-85b6-a53477e335e5-serving-cert\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516941 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45194a2a-320c-439d-9070-2c534070b7e4-metrics-tls\") pod \"dns-operator-744455d44c-7ztl2\" (UID: \"45194a2a-320c-439d-9070-2c534070b7e4\") " pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516964 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516987 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-client-ca\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.517013 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.517055 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgh4v\" (UniqueName: \"kubernetes.io/projected/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-kube-api-access-dgh4v\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.517086 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-config\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.517133 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-certificates\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.517161 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.517185 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-bound-sa-token\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.518826 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-client-ca\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.519923 5010 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-config\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.519960 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.520004 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.520105 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-service-ca\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.520107 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-config\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.520268 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27ae235-3c1c-4ee0-85b6-a53477e335e5-serving-cert\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521755 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-etcd-client\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.520139 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-trusted-ca-bundle\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521831 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf586c8c-c859-44a2-9b28-16708745cda1-node-pullsecrets\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521850 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-serving-cert\") pod 
\"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521866 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-encryption-config\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521893 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzx2n\" (UniqueName: \"kubernetes.io/projected/e27ae235-3c1c-4ee0-85b6-a53477e335e5-kube-api-access-lzx2n\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521915 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-image-import-ca\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521944 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521966 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwhnr\" (UniqueName: \"kubernetes.io/projected/5a475011-4dc0-4490-829a-8016f3b0e8a2-kube-api-access-vwhnr\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521982 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521999 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-audit\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.522020 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-policies\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 
03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.522033 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-oauth-config\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.522050 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqkpg\" (UniqueName: \"kubernetes.io/projected/ad56317f-8d37-4d59-9abe-346b4340a30c-kube-api-access-lqkpg\") pod \"cluster-samples-operator-665b6dd947-8qfbt\" (UID: \"ad56317f-8d37-4d59-9abe-346b4340a30c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.522067 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf8k7\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-kube-api-access-mf8k7\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524000 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524113 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ljpd5"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521912 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61153282-2bd6-4bbf-a04a-76909b13f961-serving-cert\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524353 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/23cdf53e-881f-4cf2-b557-e087a017b7ec-machine-approver-tls\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524517 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-audit\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524648 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-encryption-config\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524824 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-image-import-ca\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524848 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-x7hq6"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524919 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ljpd5"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.523955 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf586c8c-c859-44a2-9b28-16708745cda1-node-pullsecrets\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.525876 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.526367 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.526848 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-serving-cert\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.526931 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.527117 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-whpdl"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.527529 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-whpdl"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.527899 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.528256 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.528760 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.529153 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.529652 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.530176 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.530907 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.531391 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.531399 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.532267 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.532338 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.533361 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.533466 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.534160 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.534506 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.534656 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.534857 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c9t7q"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.535182 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.536727 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.539699 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.540459 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.540703 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.540877 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.543315 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.544065 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.544593 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.544924 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.545012 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kg4f"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.545799 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.547461 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.549829 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.549870 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.554633 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rkqd6"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.564266 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.569278 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jvtp4"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.599279 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.599340 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wtcpj"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.601936 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.603166 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.606980 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzqxj\" (UniqueName: \"kubernetes.io/projected/61153282-2bd6-4bbf-a04a-76909b13f961-kube-api-access-wzqxj\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.607504 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-77jcb"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.608322 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-77jcb"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.608747 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d7m8\" (UniqueName: \"kubernetes.io/projected/cf586c8c-c859-44a2-9b28-16708745cda1-kube-api-access-7d7m8\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.609021 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.610165 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.611165 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ljpd5"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.612183 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x857s"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.613149 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.617045 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7ztl2"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.618510 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.619444 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.623230 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624198 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624454 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624493 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-serving-cert\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624514 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b693a4b6-8aa6-489e-a797-fa486eab7443-apiservice-cert\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624533 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624551 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624570 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624594 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/433ae711-459e-4627-83c1-0fecfe929c60-audit-dir\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624612 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cde7673b-c4b1-4060-86cd-cac7120de9bf-bound-sa-token\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624630 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-srv-cert\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624647 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-config\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624667 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d5tz\" (UniqueName: \"kubernetes.io/projected/d8101cd0-5430-4786-bf8a-3d9c60ad1f7d-kube-api-access-5d5tz\") pod \"downloads-7954f5f757-jvtp4\" (UID: \"d8101cd0-5430-4786-bf8a-3d9c60ad1f7d\") " pod="openshift-console/downloads-7954f5f757-jvtp4"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624683 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624698 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effb39d8-ef30-45f3-bf93-b9dbb8de2475-config\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624713 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624729 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624744 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-images\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624788 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-service-ca-bundle\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624804 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-config\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624820 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b075f5c7-f95f-4883-8d94-d1b64bc3c451-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624835 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdntk\" (UniqueName: \"kubernetes.io/projected/4da6d2c9-755f-44e5-bab0-37cf60ee8378-kube-api-access-gdntk\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624852 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c6x9\" (UniqueName: \"kubernetes.io/projected/ba766e4c-056f-4be6-a4b9-05592b641f87-kube-api-access-8c6x9\") pod \"control-plane-machine-set-operator-78cbb6b69f-xcpwg\" (UID: \"ba766e4c-056f-4be6-a4b9-05592b641f87\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624867 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-oauth-serving-cert\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624882 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624896 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-certs\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624912 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-config\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624927 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b075f5c7-f95f-4883-8d94-d1b64bc3c451-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624944 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk877\" (UniqueName: \"kubernetes.io/projected/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-kube-api-access-fk877\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624959 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-trusted-ca\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624976 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-cabundle\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624998 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625016 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgh4v\" (UniqueName: \"kubernetes.io/projected/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-kube-api-access-dgh4v\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625032 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba766e4c-056f-4be6-a4b9-05592b641f87-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xcpwg\" (UID: \"ba766e4c-056f-4be6-a4b9-05592b641f87\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625052 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-bound-sa-token\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625068 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8lhm\" (UniqueName: \"kubernetes.io/projected/c07afc79-e943-4e79-93ed-8eedd0ade1bc-kube-api-access-q8lhm\") pod \"multus-admission-controller-857f4d67dd-x7hq6\" (UID: \"c07afc79-e943-4e79-93ed-8eedd0ade1bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625084 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-trusted-ca-bundle\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625099 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gskkj\" (UniqueName: \"kubernetes.io/projected/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-kube-api-access-gskkj\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625115 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfsz9\" (UniqueName: \"kubernetes.io/projected/9b9c4aab-790c-4581-bfc2-ad1d7302c704-kube-api-access-qfsz9\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625912 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58ae0ba7-4454-4bec-87ac-432b346ee643-service-ca-bundle\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625928 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-proxy-tls\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625952 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-proxy-tls\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.633282 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-service-ca-bundle\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.627579 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"]
Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.628458 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.128430916 +0000 UTC m=+150.284407045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.633342 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.633362 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.633376 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.632883 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-trusted-ca\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.630801 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.632057 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-config\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.632413 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-oauth-serving-cert\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.631315 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-serving-cert\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.633818 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.634337 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.634497 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c9t7q"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.634526 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-serving-cert\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.634620 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-stats-auth\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.634724 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwhnr\" (UniqueName: \"kubernetes.io/projected/5a475011-4dc0-4490-829a-8016f3b0e8a2-kube-api-access-vwhnr\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635113 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxl5b\" (UniqueName: \"kubernetes.io/projected/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-kube-api-access-nxl5b\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635157 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77bnx\" (UniqueName: \"kubernetes.io/projected/98d0bd22-70a8-4496-9074-3251c15e5b59-kube-api-access-77bnx\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635204 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-policies\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635255 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635278 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv6sx\" (UniqueName: \"kubernetes.io/projected/9cddf065-d958-4bf4-b5a8-67321cba2f67-kube-api-access-tv6sx\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635326 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b693a4b6-8aa6-489e-a797-fa486eab7443-webhook-cert\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635349 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b075f5c7-f95f-4883-8d94-d1b64bc3c451-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635372 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-encryption-config\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635415 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4da6d2c9-755f-44e5-bab0-37cf60ee8378-trusted-ca\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635440 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4da6d2c9-755f-44e5-bab0-37cf60ee8378-serving-cert\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635509 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b9c4aab-790c-4581-bfc2-ad1d7302c704-secret-volume\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635553 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdssv\" (UniqueName: \"kubernetes.io/projected/58ae0ba7-4454-4bec-87ac-432b346ee643-kube-api-access-pdssv\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635581 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635603 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635650 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-audit-policies\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635676 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635720 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmnts\" (UniqueName: \"kubernetes.io/projected/1b5592be-8839-4660-a4c4-ab662fc975eb-kube-api-access-pmnts\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635745 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftpgf\" (UniqueName: \"kubernetes.io/projected/9fed3a51-8c05-46a7-8057-6839f70b2f22-kube-api-access-ftpgf\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635795 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-config\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635821 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b693a4b6-8aa6-489e-a797-fa486eab7443-tmpfs\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635847 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zpjj\" (UniqueName: \"kubernetes.io/projected/cde7673b-c4b1-4060-86cd-cac7120de9bf-kube-api-access-9zpjj\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635897 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-tls\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635921 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635959 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s54b\" (UniqueName: \"kubernetes.io/projected/291724bc-0382-45d5-a089-356f8e04feb5-kube-api-access-8s54b\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635981 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-serving-cert\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636002 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ddcb32c-fe4a-4f24-bc77-d6bc56562d75-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pnt99\" (UID: \"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636040 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9cddf065-d958-4bf4-b5a8-67321cba2f67-profile-collector-cert\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636063 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-config\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636085 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e96179c-7517-40d5-918f-1fc379e16fec-serving-cert\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636123 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72kh9\" (UniqueName: \"kubernetes.io/projected/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-kube-api-access-72kh9\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636150 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636186 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-ca\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636207 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrlg8\" (UniqueName: \"kubernetes.io/projected/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-kube-api-access-rrlg8\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636255 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-serving-cert\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636275 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d0bd22-70a8-4496-9074-3251c15e5b59-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636314 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2n5v\" (UniqueName: \"kubernetes.io/projected/b693a4b6-8aa6-489e-a797-fa486eab7443-kube-api-access-l2n5v\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636338 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636357 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-service-ca\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636395 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/594e9304-c63f-4d73-bcad-5258c1ebdd6d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636417 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da6d2c9-755f-44e5-bab0-37cf60ee8378-config\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636442 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml6zh\" (UniqueName: \"kubernetes.io/projected/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-kube-api-access-ml6zh\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636483 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v69f4\" (UniqueName: \"kubernetes.io/projected/2e96179c-7517-40d5-918f-1fc379e16fec-kube-api-access-v69f4\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636504 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/effb39d8-ef30-45f3-bf93-b9dbb8de2475-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636547 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636575 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636598 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/291724bc-0382-45d5-a089-356f8e04feb5-serving-cert\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636641 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97kl8\" (UniqueName: \"kubernetes.io/projected/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-kube-api-access-97kl8\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636669 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqs8s\" (UniqueName: \"kubernetes.io/projected/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-kube-api-access-jqs8s\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636712 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfwvg\" (UniqueName: \"kubernetes.io/projected/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-kube-api-access-kfwvg\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636740 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc6wt\" (UniqueName: \"kubernetes.io/projected/45194a2a-320c-439d-9070-2c534070b7e4-kube-api-access-dc6wt\") pod \"dns-operator-744455d44c-7ztl2\" (UID: \"45194a2a-320c-439d-9070-2c534070b7e4\") " pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636786 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-default-certificate\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636813 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-key\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636837 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45194a2a-320c-439d-9070-2c534070b7e4-metrics-tls\") pod \"dns-operator-744455d44c-7ztl2\" (UID: \"45194a2a-320c-439d-9070-2c534070b7e4\") " pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636879 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636904 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-certificates\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636945 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e12e505-3d35-4b3e-8015-9e2341d4791e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636968 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9cddf065-d958-4bf4-b5a8-67321cba2f67-srv-cert\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636990 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e12e505-3d35-4b3e-8015-9e2341d4791e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637033 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwxm6\" (UniqueName: \"kubernetes.io/projected/4ddcb32c-fe4a-4f24-bc77-d6bc56562d75-kube-api-access-bwxm6\") pod \"package-server-manager-789f6589d5-pnt99\" (UID: \"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637064 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637104 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcflf\" (UniqueName: \"kubernetes.io/projected/433ae711-459e-4627-83c1-0fecfe929c60-kube-api-access-jcflf\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637131 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-config\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637181 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d0bd22-70a8-4496-9074-3251c15e5b59-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637207 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-node-bootstrap-token\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637275 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-service-ca\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637298 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637354 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637369 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637381 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") "
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637419 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9lvbs"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637445 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637499 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637532 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638043 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-oauth-config\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638072 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqkpg\" (UniqueName: \"kubernetes.io/projected/ad56317f-8d37-4d59-9abe-346b4340a30c-kube-api-access-lqkpg\") pod \"cluster-samples-operator-665b6dd947-8qfbt\" (UID: \"ad56317f-8d37-4d59-9abe-346b4340a30c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638095 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf8k7\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-kube-api-access-mf8k7\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638113 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-etcd-client\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638130 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7xxg\" (UniqueName: \"kubernetes.io/projected/6e12e505-3d35-4b3e-8015-9e2341d4791e-kube-api-access-j7xxg\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638148 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad56317f-8d37-4d59-9abe-346b4340a30c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8qfbt\" (UID: \"ad56317f-8d37-4d59-9abe-346b4340a30c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638166 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/effb39d8-ef30-45f3-bf93-b9dbb8de2475-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638182 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638200 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/594e9304-c63f-4d73-bcad-5258c1ebdd6d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638228 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638248 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-dir\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638264 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-client\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638299 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cde7673b-c4b1-4060-86cd-cac7120de9bf-trusted-ca\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638357 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bh9q\" (UniqueName: \"kubernetes.io/projected/0c3f3f4e-122f-40b8-a3f1-d868a36640a1-kube-api-access-4bh9q\") pod \"migrator-59844c95c7-j4pcf\" (UID: \"0c3f3f4e-122f-40b8-a3f1-d868a36640a1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638397 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638421 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c07afc79-e943-4e79-93ed-8eedd0ade1bc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-x7hq6\" (UID: \"c07afc79-e943-4e79-93ed-8eedd0ade1bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638455 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-metrics-certs\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638487 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638533 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cde7673b-c4b1-4060-86cd-cac7120de9bf-metrics-tls\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.639513 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-config\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.640025 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-policies\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.640183 
5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/594e9304-c63f-4d73-bcad-5258c1ebdd6d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.640277 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-dir\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.640420 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-config\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.640645 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.641038 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-service-ca\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.641193 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-service-ca\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.641969 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.643310 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.643632 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.643890 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.643902 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.644134 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.644604 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45194a2a-320c-439d-9070-2c534070b7e4-metrics-tls\") pod \"dns-operator-744455d44c-7ztl2\" (UID: \"45194a2a-320c-439d-9070-2c534070b7e4\") " pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.645389 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-config\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.645453 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.645681 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e96179c-7517-40d5-918f-1fc379e16fec-serving-cert\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.645890 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-trusted-ca-bundle\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.645888 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-ca\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc 
kubenswrapper[5010]: I0203 10:04:39.645929 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-certificates\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.646094 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.646096 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.646438 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-client\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.646530 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/594e9304-c63f-4d73-bcad-5258c1ebdd6d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.647713 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/291724bc-0382-45d5-a089-356f8e04feb5-serving-cert\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.648252 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-oauth-config\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.648707 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.649266 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"] Feb 03 10:04:39 crc 
kubenswrapper[5010]: I0203 10:04:39.650201 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.651151 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad56317f-8d37-4d59-9abe-346b4340a30c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8qfbt\" (UID: \"ad56317f-8d37-4d59-9abe-346b4340a30c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.651304 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.651644 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.653125 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.653345 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-x7hq6"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.653422 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-tls\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.654972 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.656866 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f9lhg"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.658365 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.658404 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vxx8p"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.658473 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.658884 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vxx8p" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.659055 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.660924 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.663200 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.663680 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.665228 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kg4f"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.666270 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f9lhg"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.671126 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vxx8p"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.672141 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.672457 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6t4bv"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.675332 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.678621 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-m4jjq"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.679678 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-m4jjq" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.679805 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m4jjq"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.692615 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.697763 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.711571 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.731776 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739679 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739729 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cde7673b-c4b1-4060-86cd-cac7120de9bf-trusted-ca\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739754 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bh9q\" (UniqueName: \"kubernetes.io/projected/0c3f3f4e-122f-40b8-a3f1-d868a36640a1-kube-api-access-4bh9q\") pod \"migrator-59844c95c7-j4pcf\" (UID: \"0c3f3f4e-122f-40b8-a3f1-d868a36640a1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739777 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c07afc79-e943-4e79-93ed-8eedd0ade1bc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-x7hq6\" (UID: \"c07afc79-e943-4e79-93ed-8eedd0ade1bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739827 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-metrics-certs\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739850 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cde7673b-c4b1-4060-86cd-cac7120de9bf-metrics-tls\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739866 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739872 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740291 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b693a4b6-8aa6-489e-a797-fa486eab7443-apiservice-cert\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740320 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740351 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740369 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/433ae711-459e-4627-83c1-0fecfe929c60-audit-dir\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740387 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740420 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cde7673b-c4b1-4060-86cd-cac7120de9bf-bound-sa-token\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740436 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-srv-cert\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 
10:04:39.740454 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-config\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740477 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740493 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effb39d8-ef30-45f3-bf93-b9dbb8de2475-config\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740511 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d5tz\" (UniqueName: \"kubernetes.io/projected/d8101cd0-5430-4786-bf8a-3d9c60ad1f7d-kube-api-access-5d5tz\") pod \"downloads-7954f5f757-jvtp4\" (UID: \"d8101cd0-5430-4786-bf8a-3d9c60ad1f7d\") " pod="openshift-console/downloads-7954f5f757-jvtp4" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740534 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-images\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740562 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b075f5c7-f95f-4883-8d94-d1b64bc3c451-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740578 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdntk\" (UniqueName: \"kubernetes.io/projected/4da6d2c9-755f-44e5-bab0-37cf60ee8378-kube-api-access-gdntk\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740601 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c6x9\" (UniqueName: \"kubernetes.io/projected/ba766e4c-056f-4be6-a4b9-05592b641f87-kube-api-access-8c6x9\") pod \"control-plane-machine-set-operator-78cbb6b69f-xcpwg\" (UID: \"ba766e4c-056f-4be6-a4b9-05592b641f87\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740634 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-certs\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740655 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740673 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b075f5c7-f95f-4883-8d94-d1b64bc3c451-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740691 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-cabundle\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740736 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba766e4c-056f-4be6-a4b9-05592b641f87-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xcpwg\" (UID: \"ba766e4c-056f-4be6-a4b9-05592b641f87\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740764 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8lhm\" (UniqueName: \"kubernetes.io/projected/c07afc79-e943-4e79-93ed-8eedd0ade1bc-kube-api-access-q8lhm\") pod \"multus-admission-controller-857f4d67dd-x7hq6\" (UID: \"c07afc79-e943-4e79-93ed-8eedd0ade1bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.740867 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.240853273 +0000 UTC m=+150.396829402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740892 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/433ae711-459e-4627-83c1-0fecfe929c60-audit-dir\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.741030 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.741491 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.741718 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gskkj\" (UniqueName: \"kubernetes.io/projected/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-kube-api-access-gskkj\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.741746 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfsz9\" (UniqueName: \"kubernetes.io/projected/9b9c4aab-790c-4581-bfc2-ad1d7302c704-kube-api-access-qfsz9\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.741939 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58ae0ba7-4454-4bec-87ac-432b346ee643-service-ca-bundle\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.742042 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.742133 5010 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-serving-cert\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.742766 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-proxy-tls\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.742945 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-proxy-tls\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743075 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxl5b\" (UniqueName: \"kubernetes.io/projected/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-kube-api-access-nxl5b\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743184 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77bnx\" (UniqueName: \"kubernetes.io/projected/98d0bd22-70a8-4496-9074-3251c15e5b59-kube-api-access-77bnx\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743019 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.742157 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-config\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.741946 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b075f5c7-f95f-4883-8d94-d1b64bc3c451-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743298 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-stats-auth\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743597 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743746 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv6sx\" (UniqueName: \"kubernetes.io/projected/9cddf065-d958-4bf4-b5a8-67321cba2f67-kube-api-access-tv6sx\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743851 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743958 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b693a4b6-8aa6-489e-a797-fa486eab7443-webhook-cert\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.744057 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b075f5c7-f95f-4883-8d94-d1b64bc3c451-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.744160 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-encryption-config\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.744282 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4da6d2c9-755f-44e5-bab0-37cf60ee8378-trusted-ca\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745063 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4da6d2c9-755f-44e5-bab0-37cf60ee8378-serving-cert\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " 
pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745192 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-audit-policies\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745288 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b9c4aab-790c-4581-bfc2-ad1d7302c704-secret-volume\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745383 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdssv\" (UniqueName: \"kubernetes.io/projected/58ae0ba7-4454-4bec-87ac-432b346ee643-kube-api-access-pdssv\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745454 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745545 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmnts\" (UniqueName: \"kubernetes.io/projected/1b5592be-8839-4660-a4c4-ab662fc975eb-kube-api-access-pmnts\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745617 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftpgf\" (UniqueName: \"kubernetes.io/projected/9fed3a51-8c05-46a7-8057-6839f70b2f22-kube-api-access-ftpgf\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745690 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b693a4b6-8aa6-489e-a797-fa486eab7443-tmpfs\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745767 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zpjj\" (UniqueName: \"kubernetes.io/projected/cde7673b-c4b1-4060-86cd-cac7120de9bf-kube-api-access-9zpjj\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745851 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9cddf065-d958-4bf4-b5a8-67321cba2f67-profile-collector-cert\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745926 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-serving-cert\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745994 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ddcb32c-fe4a-4f24-bc77-d6bc56562d75-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pnt99\" (UID: \"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746061 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72kh9\" (UniqueName: \"kubernetes.io/projected/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-kube-api-access-72kh9\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746152 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrlg8\" (UniqueName: \"kubernetes.io/projected/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-kube-api-access-rrlg8\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746246 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-serving-cert\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746317 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d0bd22-70a8-4496-9074-3251c15e5b59-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746247 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b693a4b6-8aa6-489e-a797-fa486eab7443-tmpfs\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746463 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2n5v\" (UniqueName: 
\"kubernetes.io/projected/b693a4b6-8aa6-489e-a797-fa486eab7443-kube-api-access-l2n5v\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746536 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da6d2c9-755f-44e5-bab0-37cf60ee8378-config\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746607 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml6zh\" (UniqueName: \"kubernetes.io/projected/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-kube-api-access-ml6zh\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746713 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/effb39d8-ef30-45f3-bf93-b9dbb8de2475-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746823 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97kl8\" (UniqueName: \"kubernetes.io/projected/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-kube-api-access-97kl8\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746920 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqs8s\" (UniqueName: \"kubernetes.io/projected/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-kube-api-access-jqs8s\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747241 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-key\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747340 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-default-certificate\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747434 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e12e505-3d35-4b3e-8015-9e2341d4791e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: 
\"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747536 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9cddf065-d958-4bf4-b5a8-67321cba2f67-srv-cert\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747459 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-proxy-tls\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747638 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e12e505-3d35-4b3e-8015-9e2341d4791e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747714 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwxm6\" (UniqueName: \"kubernetes.io/projected/4ddcb32c-fe4a-4f24-bc77-d6bc56562d75-kube-api-access-bwxm6\") pod \"package-server-manager-789f6589d5-pnt99\" (UID: \"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747743 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d0bd22-70a8-4496-9074-3251c15e5b59-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747770 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-node-bootstrap-token\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747798 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747815 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcflf\" (UniqueName: \"kubernetes.io/projected/433ae711-459e-4627-83c1-0fecfe929c60-kube-api-access-jcflf\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747836 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-config\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747866 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747885 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747950 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-etcd-client\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747969 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7xxg\" (UniqueName: \"kubernetes.io/projected/6e12e505-3d35-4b3e-8015-9e2341d4791e-kube-api-access-j7xxg\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747992 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.748018 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/effb39d8-ef30-45f3-bf93-b9dbb8de2475-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.748594 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b075f5c7-f95f-4883-8d94-d1b64bc3c451-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.748856 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.749135 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e12e505-3d35-4b3e-8015-9e2341d4791e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.750548 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e12e505-3d35-4b3e-8015-9e2341d4791e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.751724 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.751850 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.773033 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.787584 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cde7673b-c4b1-4060-86cd-cac7120de9bf-metrics-tls\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.797149 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.801142 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cde7673b-c4b1-4060-86cd-cac7120de9bf-trusted-ca\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.811808 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.831704 5010 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.848596 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.848714 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.348692939 +0000 UTC m=+150.504669068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.848966 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.849671 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.349661776 +0000 UTC m=+150.505637965 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.851771 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.861530 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-images\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.871645 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.871800 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9lvbs"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.877813 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-proxy-tls\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: W0203 10:04:39.879541 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf586c8c_c859_44a2_9b28_16708745cda1.slice/crio-dc0e215632636070f9233c8da5cf61ed4ccec496761b77a3b527af638caff757 WatchSource:0}: Error finding container dc0e215632636070f9233c8da5cf61ed4ccec496761b77a3b527af638caff757: Status 404 returned error can't find the container with id dc0e215632636070f9233c8da5cf61ed4ccec496761b77a3b527af638caff757 Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.893896 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.894427 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"] Feb 03 10:04:39 crc kubenswrapper[5010]: W0203 10:04:39.904436 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61153282_2bd6_4bbf_a04a_76909b13f961.slice/crio-de6014a42b56ede90300ddd6921cb59d6826d8880dbadae1fda87913014c2ca8 WatchSource:0}: Error finding container de6014a42b56ede90300ddd6921cb59d6826d8880dbadae1fda87913014c2ca8: Status 404 returned error can't find the container with id de6014a42b56ede90300ddd6921cb59d6826d8880dbadae1fda87913014c2ca8 Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.928609 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsc2k\" (UniqueName: 
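Every TearDown/MountDevice failure above carries the same root cause: kubelet has no registered endpoint for kubevirt.io.hostpath-provisioner yet. CSI drivers announce themselves to kubelet over the plugin-registration socket (under /var/lib/kubelet/plugins_registry/), and until the provisioner pod comes up and registers, any attempt to build a CSI client fails immediately and the operation is requeued. A map-lookup sketch of that failure mode; the registry type is an assumption, shaped like but not copied from kubelet's internal driver store.

    package main

    import (
        "fmt"
        "sync"
    )

    // csiRegistry stands in for kubelet's in-memory list of registered CSI
    // drivers; node plugins add themselves after the registration handshake.
    type csiRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> unix socket endpoint
    }

    func (r *csiRegistry) clientFor(name string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[name]
        if !ok {
            // Same shape as the error repeated throughout this log.
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return ep, nil
    }

    func (r *csiRegistry) register(name, endpoint string) {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.drivers[name] = endpoint
    }

    func main() {
        reg := &csiRegistry{drivers: map[string]string{}}
        if _, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println("TearDown/MountDevice would fail:", err)
        }
        // Once the driver registers, the same lookup succeeds and the queued
        // volume operations for pvc-657094db... go through on the next retry.
        reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/example/csi.sock")
        if ep, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err == nil {
            fmt.Println("client endpoint:", ep)
        }
    }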
\"kubernetes.io/projected/23cdf53e-881f-4cf2-b557-e087a017b7ec-kube-api-access-nsc2k\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.947744 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzx2n\" (UniqueName: \"kubernetes.io/projected/e27ae235-3c1c-4ee0-85b6-a53477e335e5-kube-api-access-lzx2n\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.950121 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.950287 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.450264856 +0000 UTC m=+150.606240985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.950379 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.950897 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.450885594 +0000 UTC m=+150.606861723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.951858 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.964635 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba766e4c-056f-4be6-a4b9-05592b641f87-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xcpwg\" (UID: \"ba766e4c-056f-4be6-a4b9-05592b641f87\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.971400 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.985286 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.992257 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.998908 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da6d2c9-755f-44e5-bab0-37cf60ee8378-config\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.010193 5010 util.go:30] "No sandbox for pod can be found. 
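"No sandbox for pod can be found. Need to start a new one" is kubelet's pod sync discovering that the runtime holds no live sandbox for the pod, so the next step is a CRI RunPodSandbox call before any containers can start (the ContainerStarted PLEG events further down report the resulting IDs). A bare sketch of that CRI call against CRI-O's socket; the socket path and metadata values are lifted from this log purely for illustration, and real callers (kubelet) pass a far fuller PodSandboxConfig.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // Kubelet fills this from the pod spec; Attempt increments when an
        // earlier sandbox for the same pod died and must be replaced.
        resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "controller-manager-879f6c89f-lc7dd",
                    Namespace: "openshift-controller-manager",
                    Uid:       "e27ae235-3c1c-4ee0-85b6-a53477e335e5",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("sandbox id:", resp.PodSandboxId)
    }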
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.022733 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.025426 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4da6d2c9-755f-44e5-bab0-37cf60ee8378-trusted-ca\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.032205 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.051774 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.052510 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.052598 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.552581705 +0000 UTC m=+150.708557834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.062135 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4da6d2c9-755f-44e5-bab0-37cf60ee8378-serving-cert\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.072514 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.092682 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.112868 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.132583 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.144280 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c07afc79-e943-4e79-93ed-8eedd0ade1bc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-x7hq6\" (UID: \"c07afc79-e943-4e79-93ed-8eedd0ade1bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.153478 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.153816 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.653798843 +0000 UTC m=+150.809774972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
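Each failed volume operation arms a retry window, which is what every "No retries permitted until ... (durationBeforeRetry 500ms)" line expresses: nestedpendingoperations refuses to re-run the same volume's operation until the backoff expires, and the reconciler simply re-encounters the volume on its next pass. Kubelet starts this backoff at 500ms and grows it exponentially toward a cap of roughly two minutes; a compact sketch of that policy, with constants assumed rather than copied from kubelet:

    package main

    import (
        "fmt"
        "time"
    )

    // backoff tracks the retry window for one volume operation.
    type backoff struct {
        last     time.Time
        duration time.Duration
    }

    // failed records a failure and arms the next window: 500ms at first,
    // doubling on repeated failures, capped at two minutes.
    func (b *backoff) failed(now time.Time) {
        switch {
        case b.duration == 0:
            b.duration = 500 * time.Millisecond
        case b.duration < 2*time.Minute:
            b.duration *= 2
        }
        b.last = now
    }

    // allowed reports whether a new attempt may run yet.
    func (b *backoff) allowed(now time.Time) error {
        if until := b.last.Add(b.duration); now.Before(until) {
            return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
                until.Format("15:04:05.000000000"), b.duration)
        }
        return nil
    }

    func main() {
        var b backoff
        now := time.Now()
        b.failed(now) // e.g. the TearDown failure above
        // A reconciler pass 100ms later is still inside the window:
        if err := b.allowed(now.Add(100 * time.Millisecond)); err != nil {
            fmt.Println(err)
        }
        // 600ms later the window has expired and the retry may run.
        fmt.Println("retry allowed:", b.allowed(now.Add(600*time.Millisecond)) == nil)
    }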
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.154026 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc7dd"]
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.154110 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.171472 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.179626 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ddcb32c-fe4a-4f24-bc77-d6bc56562d75-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pnt99\" (UID: \"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.192078 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: W0203 10:04:40.200077 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode27ae235_3c1c_4ee0_85b6_a53477e335e5.slice/crio-8b56ac9ef9b68e183b29025350e04525ecb7ee2dc150d387fdfd29f29126ba81 WatchSource:0}: Error finding container 8b56ac9ef9b68e183b29025350e04525ecb7ee2dc150d387fdfd29f29126ba81: Status 404 returned error can't find the container with id 8b56ac9ef9b68e183b29025350e04525ecb7ee2dc150d387fdfd29f29126ba81
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.212370 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.231584 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.252899 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.254882 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.255299 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.755274448 +0000 UTC m=+150.911250577 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.272889 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.280906 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-default-certificate\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.292429 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.297311 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-stats-auth\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.312158 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.319166 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" event={"ID":"61153282-2bd6-4bbf-a04a-76909b13f961","Type":"ContainerStarted","Data":"815c9a092d4240f3fb7d7c856a7d1fe04289a8f354f5c335fb93d5de0abf1f2c"}
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.319229 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" event={"ID":"61153282-2bd6-4bbf-a04a-76909b13f961","Type":"ContainerStarted","Data":"de6014a42b56ede90300ddd6921cb59d6826d8880dbadae1fda87913014c2ca8"}
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.319377 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.320469 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" event={"ID":"23cdf53e-881f-4cf2-b557-e087a017b7ec","Type":"ContainerStarted","Data":"de63740c8bff7cdcb85cb9e685ecdbe9ab444131ef57e443aaa8fea303a4459d"}
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.320513 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" event={"ID":"23cdf53e-881f-4cf2-b557-e087a017b7ec","Type":"ContainerStarted","Data":"e73bad45656b96d3815aa3ce12b06891b4a27b4089969094ff27b1f088236ebd"}
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.321114 5010 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-qgmq6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.321180 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" podUID="61153282-2bd6-4bbf-a04a-76909b13f961" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.323510 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" event={"ID":"cf586c8c-c859-44a2-9b28-16708745cda1","Type":"ContainerDied","Data":"d8e170ae0df330deb0c6596bc5973cb373d32b7634e54c39e7cb19723d18b5aa"}
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.323854 5010 generic.go:334] "Generic (PLEG): container finished" podID="cf586c8c-c859-44a2-9b28-16708745cda1" containerID="d8e170ae0df330deb0c6596bc5973cb373d32b7634e54c39e7cb19723d18b5aa" exitCode=0
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.323943 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" event={"ID":"cf586c8c-c859-44a2-9b28-16708745cda1","Type":"ContainerStarted","Data":"dc0e215632636070f9233c8da5cf61ed4ccec496761b77a3b527af638caff757"}
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.324543 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-metrics-certs\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.325666 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" event={"ID":"e27ae235-3c1c-4ee0-85b6-a53477e335e5","Type":"ContainerStarted","Data":"9193e654b0aae87a0f6cb66b87865bff8d5a0d8845927c6e2ff446174e9141b4"}
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.325808 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" event={"ID":"e27ae235-3c1c-4ee0-85b6-a53477e335e5","Type":"ContainerStarted","Data":"8b56ac9ef9b68e183b29025350e04525ecb7ee2dc150d387fdfd29f29126ba81"}
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.325910 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.328060 5010 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lc7dd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.328102 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" podUID="e27ae235-3c1c-4ee0-85b6-a53477e335e5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
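The prober.go "Probe failed" entries above are expected this early: the route-controller-manager and controller-manager containers have only just started (see the ContainerStarted events), so nothing is listening on :8443 yet and the readiness GET is refused; the pods stay NotReady until a later probe succeeds. A sketch of the check itself, matching kubelet's HTTP probe semantics where any 2xx/3xx response counts as success; the URL is taken from the log, and skipping certificate verification mirrors how kubelet probes HTTPS endpoints.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func probeReadiness(url string) error {
        client := &http.Client{
            Timeout: time.Second,
            // HTTPS probes do not verify the container's serving certificate.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            // e.g. "dial tcp 10.217.0.6:8443: connect: connection refused"
            return fmt.Errorf("probe failed: %w", err)
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("probe failed: status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probeReadiness("https://10.217.0.6:8443/healthz"); err != nil {
            fmt.Println(err)
        }
    }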
output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.331230 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.333436 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58ae0ba7-4454-4bec-87ac-432b346ee643-service-ca-bundle\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.351772 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.356673 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.357797 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.857785503 +0000 UTC m=+151.013761632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.372254 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.387337 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-srv-cert\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.392766 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.401764 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b9c4aab-790c-4581-bfc2-ad1d7302c704-secret-volume\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.402311 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/9cddf065-d958-4bf4-b5a8-67321cba2f67-profile-collector-cert\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.405791 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.411979 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.416786 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b693a4b6-8aa6-489e-a797-fa486eab7443-webhook-cert\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.423654 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b693a4b6-8aa6-489e-a797-fa486eab7443-apiservice-cert\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.432124 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.451561 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.457773 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.457885 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.957863017 +0000 UTC m=+151.113839146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.457992 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.458269 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.958261588 +0000 UTC m=+151.114237717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.472420 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.479728 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-serving-cert\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.492748 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.499485 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-config\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.511705 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.532548 5010 request.go:700] Waited for 1.000764512s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0 Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.535396 5010 reflector.go:368] Caches populated for 
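The request.go:700 "Waited ... due to client-side throttling, not priority and fairness" line is kubelet's own client-go rate limiter at work: a local token bucket (the QPS/Burst knobs on rest.Config) delayed this secret GET for a second, and the message explicitly rules out server-side API priority-and-fairness queuing. During a startup storm like this one, dozens of secret and configmap fetches drain the bucket quickly. A sketch of where those knobs live; the values are illustrative, not kubelet's actual settings.

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        // With QPS=5 and Burst=10, an initial flood of GETs exhausts the
        // bucket and later requests block until tokens refill; client-go
        // logs that wait as "client-side throttling".
        cfg.QPS = 5
        cfg.Burst = 10
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println("client ready:", cs != nil)
    }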
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.541345 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9cddf065-d958-4bf4-b5a8-67321cba2f67-srv-cert\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.553702 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.558948 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.559120 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.059096345 +0000 UTC m=+151.215072484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.559571 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.559952 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.059941569 +0000 UTC m=+151.215917698 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.571232 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.591912 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.611581 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.621582 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-etcd-client\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.631581 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.640621 5010 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.640723 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images podName:dc73dc6e-53ff-48b8-932e-d5aeb839f2dd nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.140700165 +0000 UTC m=+151.296676294 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images") pod "machine-api-operator-5694c8668f-5mq4r" (UID: "dc73dc6e-53ff-48b8-932e-d5aeb839f2dd") : failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.641104 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.651980 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.655416 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-serving-cert\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.660929 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.662479 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.161835196 +0000 UTC m=+151.317811315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.672313 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.676761 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-encryption-config\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.692169 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.711846 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.731829 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.735953 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-audit-policies\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741000 5010 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741052 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/effb39d8-ef30-45f3-bf93-b9dbb8de2475-config podName:effb39d8-ef30-45f3-bf93-b9dbb8de2475 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.241038158 +0000 UTC m=+151.397014287 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/effb39d8-ef30-45f3-bf93-b9dbb8de2475-config") pod "kube-controller-manager-operator-78b949d7b-2nxxl" (UID: "effb39d8-ef30-45f3-bf93-b9dbb8de2475") : failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741057 5010 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741123 5010 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741199 5010 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741139 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics podName:1b5592be-8839-4660-a4c4-ab662fc975eb nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.24111936 +0000 UTC m=+151.397095489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics") pod "marketplace-operator-79b997595-6kg4f" (UID: "1b5592be-8839-4660-a4c4-ab662fc975eb") : failed to sync secret cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741304 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-cabundle podName:d882e1bb-7ece-45ea-9e5e-0d23f162f06e nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.241281505 +0000 UTC m=+151.397257714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-cabundle") pod "service-ca-9c57cc56f-c9t7q" (UID: "d882e1bb-7ece-45ea-9e5e-0d23f162f06e") : failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741326 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-certs podName:9fed3a51-8c05-46a7-8057-6839f70b2f22 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.241315656 +0000 UTC m=+151.397291915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-certs") pod "machine-config-server-77jcb" (UID: "9fed3a51-8c05-46a7-8057-6839f70b2f22") : failed to sync secret cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.744020 5010 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.744086 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca podName:1b5592be-8839-4660-a4c4-ab662fc975eb nodeName:}" failed. 
No retries permitted until 2026-02-03 10:04:41.244070214 +0000 UTC m=+151.400046423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca") pod "marketplace-operator-79b997595-6kg4f" (UID: "1b5592be-8839-4660-a4c4-ab662fc975eb") : failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.745682 5010 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.745772 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume podName:9b9c4aab-790c-4581-bfc2-ad1d7302c704 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.245752192 +0000 UTC m=+151.401728321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume") pod "collect-profiles-29501880-x6pjp" (UID: "9b9c4aab-790c-4581-bfc2-ad1d7302c704") : failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.746783 5010 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.746798 5010 secret.go:188] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.746866 5010 secret.go:188] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.746848 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-serving-cert podName:51fcb019-af4d-4f3d-b1b0-4b4e6761db7c nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.246836043 +0000 UTC m=+151.402812252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-serving-cert") pod "openshift-config-operator-7777fb866f-cp6s5" (UID: "51fcb019-af4d-4f3d-b1b0-4b4e6761db7c") : failed to sync secret cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.746929 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/98d0bd22-70a8-4496-9074-3251c15e5b59-config podName:98d0bd22-70a8-4496-9074-3251c15e5b59 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.246902885 +0000 UTC m=+151.402879104 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/98d0bd22-70a8-4496-9074-3251c15e5b59-config") pod "openshift-controller-manager-operator-756b6f6bc6-m76db" (UID: "98d0bd22-70a8-4496-9074-3251c15e5b59") : failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.746945 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/effb39d8-ef30-45f3-bf93-b9dbb8de2475-serving-cert podName:effb39d8-ef30-45f3-bf93-b9dbb8de2475 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.246939716 +0000 UTC m=+151.402915845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/effb39d8-ef30-45f3-bf93-b9dbb8de2475-serving-cert") pod "kube-controller-manager-operator-78b949d7b-2nxxl" (UID: "effb39d8-ef30-45f3-bf93-b9dbb8de2475") : failed to sync secret cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.747456 5010 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.747514 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-key podName:d882e1bb-7ece-45ea-9e5e-0d23f162f06e nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.247494742 +0000 UTC m=+151.403470941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-key") pod "service-ca-9c57cc56f-c9t7q" (UID: "d882e1bb-7ece-45ea-9e5e-0d23f162f06e") : failed to sync secret cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.750112 5010 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.748640 5010 secret.go:188] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.750191 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-etcd-serving-ca podName:433ae711-459e-4627-83c1-0fecfe929c60 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.250165577 +0000 UTC m=+151.406141706 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-etcd-serving-ca") pod "apiserver-7bbb656c7d-snrzp" (UID: "433ae711-459e-4627-83c1-0fecfe929c60") : failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.750229 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d0bd22-70a8-4496-9074-3251c15e5b59-serving-cert podName:98d0bd22-70a8-4496-9074-3251c15e5b59 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.250202058 +0000 UTC m=+151.406178187 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/98d0bd22-70a8-4496-9074-3251c15e5b59-serving-cert") pod "openshift-controller-manager-operator-756b6f6bc6-m76db" (UID: "98d0bd22-70a8-4496-9074-3251c15e5b59") : failed to sync secret cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.750350 5010 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.750400 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-node-bootstrap-token podName:9fed3a51-8c05-46a7-8057-6839f70b2f22 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.250382654 +0000 UTC m=+151.406358783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-node-bootstrap-token") pod "machine-config-server-77jcb" (UID: "9fed3a51-8c05-46a7-8057-6839f70b2f22") : failed to sync secret cache: timed out waiting for the condition Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.751688 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.762922 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.763564 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.263548318 +0000 UTC m=+151.419524447 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.772203 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.791848 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.812188 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.831974 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.852950 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.864277 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.864489 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.364458027 +0000 UTC m=+151.520434156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.864972 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.865472 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.365462426 +0000 UTC m=+151.521438555 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.872592 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.892009 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.912314 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.931917 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.951781 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.966169 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.966832 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.466814077 +0000 UTC m=+151.622790206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.971467 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.993361 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.011836 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.032299 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.052540 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.067666 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.068324 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.568310263 +0000 UTC m=+151.724286392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.072295 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.091722 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.112136 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.132242 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.152104 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.169587 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.169864 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.170492 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.670474457 +0000 UTC m=+151.826450586 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.172138 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.192069 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.214616 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.231639 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.264437 5010 csr.go:261] certificate signing request csr-55hvk is approved, waiting to be issued Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.264609 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.267798 5010 csr.go:257] certificate signing request csr-55hvk is issued Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.271949 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272040 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-serving-cert\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272066 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d0bd22-70a8-4496-9074-3251c15e5b59-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272120 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/effb39d8-ef30-45f3-bf93-b9dbb8de2475-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272173 5010 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-key\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272207 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d0bd22-70a8-4496-9074-3251c15e5b59-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272251 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-node-bootstrap-token\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272277 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272384 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272409 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272447 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effb39d8-ef30-45f3-bf93-b9dbb8de2475-config\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272504 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-certs\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272538 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-cabundle\") pod 
\"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272689 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.274555 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272411 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.274646 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.275346 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effb39d8-ef30-45f3-bf93-b9dbb8de2475-config\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.275602 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.775589706 +0000 UTC m=+151.931565835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.275729 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-cabundle\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.276606 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.276838 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.277614 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-key\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.278557 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-serving-cert\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.279897 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d0bd22-70a8-4496-9074-3251c15e5b59-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.281745 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/effb39d8-ef30-45f3-bf93-b9dbb8de2475-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.283311 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/98d0bd22-70a8-4496-9074-3251c15e5b59-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.312765 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.333265 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" event={"ID":"cf586c8c-c859-44a2-9b28-16708745cda1","Type":"ContainerStarted","Data":"0c60082eb619569985a7b2e18cf2135863bc46259049f7f4275c8afcc02527da"} Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.333345 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" event={"ID":"cf586c8c-c859-44a2-9b28-16708745cda1","Type":"ContainerStarted","Data":"cb8e9772c3be3366496706d93d1c3728a070d0862f81a47c07e5217ceaa40dc2"} Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.335098 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" event={"ID":"23cdf53e-881f-4cf2-b557-e087a017b7ec","Type":"ContainerStarted","Data":"dc4a6ea017a4a42cc8306e1e9e833360ad98ccb50390758b6349fe4e14a23f36"} Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.335980 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.340207 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.340375 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.348629 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-node-bootstrap-token\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.352513 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.359471 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-certs\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.374198 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.374430 5010 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.874378145 +0000 UTC m=+152.030354284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.374894 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.375264 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.875243519 +0000 UTC m=+152.031219648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.396306 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk877\" (UniqueName: \"kubernetes.io/projected/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-kube-api-access-fk877\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.411412 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-bound-sa-token\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.426148 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgh4v\" (UniqueName: \"kubernetes.io/projected/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-kube-api-access-dgh4v\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.450107 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwhnr\" (UniqueName: 
\"kubernetes.io/projected/5a475011-4dc0-4490-829a-8016f3b0e8a2-kube-api-access-vwhnr\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.476660 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.476789 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.976769066 +0000 UTC m=+152.132745195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.476963 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.477309 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.977299011 +0000 UTC m=+152.133275140 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.477860 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.492026 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v69f4\" (UniqueName: \"kubernetes.io/projected/2e96179c-7517-40d5-918f-1fc379e16fec-kube-api-access-v69f4\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.511702 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfwvg\" (UniqueName: \"kubernetes.io/projected/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-kube-api-access-kfwvg\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.527431 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s54b\" (UniqueName: \"kubernetes.io/projected/291724bc-0382-45d5-a089-356f8e04feb5-kube-api-access-8s54b\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.550393 5010 request.go:700] Waited for 1.905945057s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.552714 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc6wt\" (UniqueName: \"kubernetes.io/projected/45194a2a-320c-439d-9070-2c534070b7e4-kube-api-access-dc6wt\") pod \"dns-operator-744455d44c-7ztl2\" (UID: \"45194a2a-320c-439d-9070-2c534070b7e4\") " pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.573161 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqkpg\" (UniqueName: \"kubernetes.io/projected/ad56317f-8d37-4d59-9abe-346b4340a30c-kube-api-access-lqkpg\") pod \"cluster-samples-operator-665b6dd947-8qfbt\" (UID: \"ad56317f-8d37-4d59-9abe-346b4340a30c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.578587 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.578767 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.078717874 +0000 UTC m=+152.234694013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.579089 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.579159 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.579565 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.079553458 +0000 UTC m=+152.235529657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.590470 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf8k7\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-kube-api-access-mf8k7\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.597977 5010 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.601640 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.611817 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.634330 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.652421 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.657319 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.673057 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.680584 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.680816 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.180784706 +0000 UTC m=+152.336760835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.681229 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.681819 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.181803305 +0000 UTC m=+152.337779434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.691595 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.702087 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.713058 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.727878 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.745496 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.748347 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.756225 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.759369 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.780695 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.782456 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.782919 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.282905819 +0000 UTC m=+152.438881948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.848436 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cde7673b-c4b1-4060-86cd-cac7120de9bf-bound-sa-token\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.853065 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdntk\" (UniqueName: \"kubernetes.io/projected/4da6d2c9-755f-44e5-bab0-37cf60ee8378-kube-api-access-gdntk\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.884036 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d5tz\" (UniqueName: \"kubernetes.io/projected/d8101cd0-5430-4786-bf8a-3d9c60ad1f7d-kube-api-access-5d5tz\") pod \"downloads-7954f5f757-jvtp4\" (UID: \"d8101cd0-5430-4786-bf8a-3d9c60ad1f7d\") " pod="openshift-console/downloads-7954f5f757-jvtp4" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.884716 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.885185 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.385170297 +0000 UTC m=+152.541146426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.899757 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b075f5c7-f95f-4883-8d94-d1b64bc3c451-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.904929 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wtcpj"] Feb 03 10:04:41 crc kubenswrapper[5010]: W0203 10:04:41.923680 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61f7221f_b9e1_45bc_8a9e_2f512c9e457d.slice/crio-e28ff007b543d7700a90a71c76b34e3da1bf25749689935b2de9d5cc48606a37 WatchSource:0}: Error finding container e28ff007b543d7700a90a71c76b34e3da1bf25749689935b2de9d5cc48606a37: Status 404 returned error can't find the container with id e28ff007b543d7700a90a71c76b34e3da1bf25749689935b2de9d5cc48606a37 Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.931741 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8lhm\" (UniqueName: \"kubernetes.io/projected/c07afc79-e943-4e79-93ed-8eedd0ade1bc-kube-api-access-q8lhm\") pod \"multus-admission-controller-857f4d67dd-x7hq6\" (UID: \"c07afc79-e943-4e79-93ed-8eedd0ade1bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.954378 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfsz9\" (UniqueName: \"kubernetes.io/projected/9b9c4aab-790c-4581-bfc2-ad1d7302c704-kube-api-access-qfsz9\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.954378 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c6x9\" (UniqueName: \"kubernetes.io/projected/ba766e4c-056f-4be6-a4b9-05592b641f87-kube-api-access-8c6x9\") pod \"control-plane-machine-set-operator-78cbb6b69f-xcpwg\" (UID: \"ba766e4c-056f-4be6-a4b9-05592b641f87\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.964635 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.965979 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt"] Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.978548 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gskkj\" (UniqueName: \"kubernetes.io/projected/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-kube-api-access-gskkj\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.990067 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.990493 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.490478311 +0000 UTC m=+152.646454440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.993852 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxl5b\" (UniqueName: \"kubernetes.io/projected/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-kube-api-access-nxl5b\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.038924 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv6sx\" (UniqueName: \"kubernetes.io/projected/9cddf065-d958-4bf4-b5a8-67321cba2f67-kube-api-access-tv6sx\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.054736 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77bnx\" (UniqueName: \"kubernetes.io/projected/98d0bd22-70a8-4496-9074-3251c15e5b59-kube-api-access-77bnx\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.056609 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdssv\" (UniqueName: 
\"kubernetes.io/projected/58ae0ba7-4454-4bec-87ac-432b346ee643-kube-api-access-pdssv\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.082477 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.089730 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jvtp4" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.093748 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.094146 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.594133908 +0000 UTC m=+152.750110037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.095054 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmnts\" (UniqueName: \"kubernetes.io/projected/1b5592be-8839-4660-a4c4-ab662fc975eb-kube-api-access-pmnts\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.100071 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rkqd6"] Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.103464 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftpgf\" (UniqueName: \"kubernetes.io/projected/9fed3a51-8c05-46a7-8057-6839f70b2f22-kube-api-access-ftpgf\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.103758 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.122302 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zpjj\" (UniqueName: \"kubernetes.io/projected/cde7673b-c4b1-4060-86cd-cac7120de9bf-kube-api-access-9zpjj\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.126418 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.139981 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.144017 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72kh9\" (UniqueName: \"kubernetes.io/projected/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-kube-api-access-72kh9\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.149816 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.153899 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bh9q\" (UniqueName: \"kubernetes.io/projected/0c3f3f4e-122f-40b8-a3f1-d868a36640a1-kube-api-access-4bh9q\") pod \"migrator-59844c95c7-j4pcf\" (UID: \"0c3f3f4e-122f-40b8-a3f1-d868a36640a1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.156183 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6t4bv"] Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.168349 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.171381 5010 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.171449 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images podName:dc73dc6e-53ff-48b8-932e-d5aeb839f2dd nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.171429116 +0000 UTC m=+153.327405245 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images") pod "machine-api-operator-5694c8668f-5mq4r" (UID: "dc73dc6e-53ff-48b8-932e-d5aeb839f2dd") : failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.181734 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7ztl2"] Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.197704 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrlg8\" (UniqueName: \"kubernetes.io/projected/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-kube-api-access-rrlg8\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:42 crc kubenswrapper[5010]: W0203 10:04:42.197817 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a475011_4dc0_4490_829a_8016f3b0e8a2.slice/crio-f8f57db6b0062ed4b61ecab8e52afe31f6118dd660c843052c1d2ff893b91694 WatchSource:0}: Error finding container f8f57db6b0062ed4b61ecab8e52afe31f6118dd660c843052c1d2ff893b91694: Status 404 returned error can't find the container with id f8f57db6b0062ed4b61ecab8e52afe31f6118dd660c843052c1d2ff893b91694 Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.198280 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.198575 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.698559037 +0000 UTC m=+152.854535166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.198670 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.210898 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2n5v\" (UniqueName: \"kubernetes.io/projected/b693a4b6-8aa6-489e-a797-fa486eab7443-kube-api-access-l2n5v\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.214739 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.223901 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.225768 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"] Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.233881 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.234606 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml6zh\" (UniqueName: \"kubernetes.io/projected/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-kube-api-access-ml6zh\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.252274 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqs8s\" (UniqueName: \"kubernetes.io/projected/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-kube-api-access-jqs8s\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.252750 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.254471 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97kl8\" (UniqueName: \"kubernetes.io/projected/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-kube-api-access-97kl8\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.258124 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr"] Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.274052 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-03 09:59:41 +0000 UTC, rotation deadline is 2026-11-08 08:15:03.034348865 +0000 UTC Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.274092 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6670h10m20.760260291s for next certificate rotation Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.274096 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.281936 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.293198 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwxm6\" (UniqueName: \"kubernetes.io/projected/4ddcb32c-fe4a-4f24-bc77-d6bc56562d75-kube-api-access-bwxm6\") pod \"package-server-manager-789f6589d5-pnt99\" (UID: \"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.298081 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcflf\" (UniqueName: \"kubernetes.io/projected/433ae711-459e-4627-83c1-0fecfe929c60-kube-api-access-jcflf\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.301072 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.301463 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.801450082 +0000 UTC m=+152.957426221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.302816 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.311327 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.318612 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.333122 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7xxg\" (UniqueName: \"kubernetes.io/projected/6e12e505-3d35-4b3e-8015-9e2341d4791e-kube-api-access-j7xxg\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.337038 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.359515 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.365520 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/effb39d8-ef30-45f3-bf93-b9dbb8de2475-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.386508 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wtcpj" event={"ID":"61f7221f-b9e1-45bc-8a9e-2f512c9e457d","Type":"ContainerStarted","Data":"f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa"} Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.386558 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wtcpj" event={"ID":"61f7221f-b9e1-45bc-8a9e-2f512c9e457d","Type":"ContainerStarted","Data":"e28ff007b543d7700a90a71c76b34e3da1bf25749689935b2de9d5cc48606a37"} Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.388300 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" event={"ID":"45194a2a-320c-439d-9070-2c534070b7e4","Type":"ContainerStarted","Data":"7c633523ca54953ccddd00a9ec430ee25964e92694e716a35026049bf91cbdb7"} Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.388981 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" event={"ID":"5a475011-4dc0-4490-829a-8016f3b0e8a2","Type":"ContainerStarted","Data":"f8f57db6b0062ed4b61ecab8e52afe31f6118dd660c843052c1d2ff893b91694"} Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.395054 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" event={"ID":"2e96179c-7517-40d5-918f-1fc379e16fec","Type":"ContainerStarted","Data":"1b7d2cfbbe1ad8dcf31cb2fe132275f407edd85657f502e7daf7eb1bd7ce0447"} Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.395329 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.401970 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402196 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d5af6be2-06e9-4fbc-a138-ada090853bc7-cert\") pod \"ingress-canary-vxx8p\" (UID: \"d5af6be2-06e9-4fbc-a138-ada090853bc7\") " pod="openshift-ingress-canary/ingress-canary-vxx8p" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402239 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-socket-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402278 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-plugins-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402357 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nb7d\" (UniqueName: \"kubernetes.io/projected/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-kube-api-access-2nb7d\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402498 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-mountpoint-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402552 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng6cr\" (UniqueName: \"kubernetes.io/projected/a3d78816-3c67-4a17-8951-b605e971aa3b-kube-api-access-ng6cr\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402644 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-registration-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402663 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr725\" (UniqueName: 
\"kubernetes.io/projected/d5af6be2-06e9-4fbc-a138-ada090853bc7-kube-api-access-cr725\") pod \"ingress-canary-vxx8p\" (UID: \"d5af6be2-06e9-4fbc-a138-ada090853bc7\") " pod="openshift-ingress-canary/ingress-canary-vxx8p" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402677 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-csi-data-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402702 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3d78816-3c67-4a17-8951-b605e971aa3b-metrics-tls\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402717 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d78816-3c67-4a17-8951-b605e971aa3b-config-volume\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq" Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.403058 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.90303712 +0000 UTC m=+153.059013249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.427056 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" event={"ID":"ad56317f-8d37-4d59-9abe-346b4340a30c","Type":"ContainerStarted","Data":"b8bd4f5410b30f93f712b765a574503f90e387b8be3bfc0b76454a7e6cf020f2"} Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.427745 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.428055 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.455706 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.483694 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.496415 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"] Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.509981 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514043 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-mountpoint-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514137 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng6cr\" (UniqueName: \"kubernetes.io/projected/a3d78816-3c67-4a17-8951-b605e971aa3b-kube-api-access-ng6cr\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514204 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514399 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-registration-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514426 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr725\" (UniqueName: \"kubernetes.io/projected/d5af6be2-06e9-4fbc-a138-ada090853bc7-kube-api-access-cr725\") pod \"ingress-canary-vxx8p\" (UID: \"d5af6be2-06e9-4fbc-a138-ada090853bc7\") " pod="openshift-ingress-canary/ingress-canary-vxx8p" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514440 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-csi-data-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514474 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3d78816-3c67-4a17-8951-b605e971aa3b-metrics-tls\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514532 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d78816-3c67-4a17-8951-b605e971aa3b-config-volume\") pod 
\"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.522431 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d5af6be2-06e9-4fbc-a138-ada090853bc7-cert\") pod \"ingress-canary-vxx8p\" (UID: \"d5af6be2-06e9-4fbc-a138-ada090853bc7\") " pod="openshift-ingress-canary/ingress-canary-vxx8p" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.522684 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-socket-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.523038 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-plugins-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.536741 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nb7d\" (UniqueName: \"kubernetes.io/projected/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-kube-api-access-2nb7d\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.537686 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-mountpoint-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.548987 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.048968299 +0000 UTC m=+153.204944428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.563936 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-registration-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.585694 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-socket-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.586206 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-csi-data-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.587193 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-plugins-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.587414 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.589002 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d78816-3c67-4a17-8951-b605e971aa3b-config-volume\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.590649 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d5af6be2-06e9-4fbc-a138-ada090853bc7-cert\") pod \"ingress-canary-vxx8p\" (UID: \"d5af6be2-06e9-4fbc-a138-ada090853bc7\") " pod="openshift-ingress-canary/ingress-canary-vxx8p" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.591923 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.595566 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nb7d\" (UniqueName: \"kubernetes.io/projected/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-kube-api-access-2nb7d\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.605415 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3d78816-3c67-4a17-8951-b605e971aa3b-metrics-tls\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.607047 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr725\" (UniqueName: \"kubernetes.io/projected/d5af6be2-06e9-4fbc-a138-ada090853bc7-kube-api-access-cr725\") pod \"ingress-canary-vxx8p\" (UID: \"d5af6be2-06e9-4fbc-a138-ada090853bc7\") " pod="openshift-ingress-canary/ingress-canary-vxx8p" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.611880 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng6cr\" (UniqueName: \"kubernetes.io/projected/a3d78816-3c67-4a17-8951-b605e971aa3b-kube-api-access-ng6cr\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.637952 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.638237 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.138206677 +0000 UTC m=+153.294182806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.648481 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln"] Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.648704 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.732013 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vxx8p" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.739509 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.739873 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.239860457 +0000 UTC m=+153.395836586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.777594 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-m4jjq" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.799327 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg"] Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.851500 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.852111 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.352093098 +0000 UTC m=+153.508069227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.869433 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" podStartSLOduration=127.86941018 podStartE2EDuration="2m7.86941018s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:42.86905914 +0000 UTC m=+153.025035269" watchObservedRunningTime="2026-02-03 10:04:42.86941018 +0000 UTC m=+153.025386309" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.953010 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.953364 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.453349356 +0000 UTC m=+153.609325485 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.992564 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" podStartSLOduration=127.99252343 podStartE2EDuration="2m7.99252343s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:42.990491822 +0000 UTC m=+153.146467951" watchObservedRunningTime="2026-02-03 10:04:42.99252343 +0000 UTC m=+153.148499559" Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.025515 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" podStartSLOduration=128.025500818 podStartE2EDuration="2m8.025500818s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:43.024496639 +0000 UTC m=+153.180472758" watchObservedRunningTime="2026-02-03 10:04:43.025500818 +0000 UTC m=+153.181476947" Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.055018 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.055394 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.555380407 +0000 UTC m=+153.711356536 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.105074 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" podStartSLOduration=127.105037519 podStartE2EDuration="2m7.105037519s" podCreationTimestamp="2026-02-03 10:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:43.104662828 +0000 UTC m=+153.260638957" watchObservedRunningTime="2026-02-03 10:04:43.105037519 +0000 UTC m=+153.261013658" Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.156355 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.156626 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.656615726 +0000 UTC m=+153.812591855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.257615 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.257787 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.757756301 +0000 UTC m=+153.913732440 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.258203 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.258337 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.259170 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.759147531 +0000 UTC m=+153.915123660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.261519 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.359654 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.361850 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.861812169 +0000 UTC m=+154.017788318 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.461751 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.462012 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.962001558 +0000 UTC m=+154.117977687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.467531 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.474794 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" event={"ID":"291724bc-0382-45d5-a089-356f8e04feb5","Type":"ContainerStarted","Data":"d09b6b5f9ac6bd18361a9402bf1dca7d0a94a47065f382b54d94d62e893c1442"} Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.475455 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" event={"ID":"8f59fb23-ca1e-487d-a345-9eada8d1c7a8","Type":"ContainerStarted","Data":"d399b1c5a3f43e58fedc7b9a0a08aed708e61a8d74d46b2f172ad28150ef8e77"} Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.476875 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" event={"ID":"9b9c4aab-790c-4581-bfc2-ad1d7302c704","Type":"ContainerStarted","Data":"68feaa08ed8d91769630ca032dc73a0d3797e1b08b8b7690cc25c9c07a16da2d"} Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.477773 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" event={"ID":"ad56317f-8d37-4d59-9abe-346b4340a30c","Type":"ContainerStarted","Data":"43e7a9a88e3189f6d03a24d82d6bf5772d80eb44d7e35ef9262d2307d16d642e"} Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.479410 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-whpdl" event={"ID":"58ae0ba7-4454-4bec-87ac-432b346ee643","Type":"ContainerStarted","Data":"5dc9dea6bb83b5aa1a5dc6a32b24b5130b67e717def7180825c1220d656eae5f"} Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.487919 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-77jcb" event={"ID":"9fed3a51-8c05-46a7-8057-6839f70b2f22","Type":"ContainerStarted","Data":"ae890a1155114474ca855d42e61d125728f56e3c0bdaf5cc6c93ab0eda43bc46"} Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.489258 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-wtcpj" podStartSLOduration=128.489241892 podStartE2EDuration="2m8.489241892s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:43.488311056 +0000 UTC m=+153.644287195" watchObservedRunningTime="2026-02-03 10:04:43.489241892 +0000 UTC m=+153.645218021" Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.494722 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" event={"ID":"b075f5c7-f95f-4883-8d94-d1b64bc3c451","Type":"ContainerStarted","Data":"c226bd811c14d9f2781ff06c9170ca96b94d7443a0abb725c369539becb8c659"} Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.496109 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" event={"ID":"ba766e4c-056f-4be6-a4b9-05592b641f87","Type":"ContainerStarted","Data":"b3bf5d30070b3fb5585bd35ae1024f758c653218b59c4571de8b3db3f4707cdb"} Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.512956 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh"] Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.562658 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.563751 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.06373552 +0000 UTC m=+154.219711649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.665587 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.667111 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.167072948 +0000 UTC m=+154.323049077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.768273 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.768708 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.268687317 +0000 UTC m=+154.424663456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.768962 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.770291 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.269358496 +0000 UTC m=+154.425334625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.877293 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.878045 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.378031166 +0000 UTC m=+154.534007295 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.963535 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw"] Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.978923 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.979327 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.479315496 +0000 UTC m=+154.635291625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.080704 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.080994 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.580978485 +0000 UTC m=+154.736954614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.148988 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ljpd5"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.162278 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.166812 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jvtp4"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.182090 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.182658 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.682647116 +0000 UTC m=+154.838623235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.285443 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.285855 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.785835819 +0000 UTC m=+154.941811948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.386980 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.387653 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.887639244 +0000 UTC m=+155.043615373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.431054 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.446085 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.488067 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.490805 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.988546743 +0000 UTC m=+155.144522872 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.511685 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" event={"ID":"2e96179c-7517-40d5-918f-1fc379e16fec","Type":"ContainerStarted","Data":"c06f71b8a3485feb4d4e37099aefa63f8ec2028b510e3bdf44f1b8c79a936b18"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.514345 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" event={"ID":"8f59fb23-ca1e-487d-a345-9eada8d1c7a8","Type":"ContainerStarted","Data":"94a6318a94fadd61ac6fffc64c12d749005bd1f05159ad152119aa6c71e84f25"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.518581 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-77jcb" event={"ID":"9fed3a51-8c05-46a7-8057-6839f70b2f22","Type":"ContainerStarted","Data":"b4b7ea1d93ea8b711f0814bb0c671c1b519562e24821324d368aaf60782de6c2"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.528562 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ljpd5" event={"ID":"4da6d2c9-755f-44e5-bab0-37cf60ee8378","Type":"ContainerStarted","Data":"9bf5ab8173b90fcf1fb1b6b6f0cee7ebde419e996a1afc9923b2156ea4ae9ec5"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.528605 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ljpd5" event={"ID":"4da6d2c9-755f-44e5-bab0-37cf60ee8378","Type":"ContainerStarted","Data":"0a7b830c84f4c17e07abbcd752af7b6757f4601b9486d64923c938a9ea06cb7b"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.529338 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.534697 5010 patch_prober.go:28] interesting pod/console-operator-58897d9998-ljpd5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.534753 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-ljpd5" podUID="4da6d2c9-755f-44e5-bab0-37cf60ee8378" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.534993 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" podStartSLOduration=129.534972333 podStartE2EDuration="2m9.534972333s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 
10:04:44.534902251 +0000 UTC m=+154.690878370" watchObservedRunningTime="2026-02-03 10:04:44.534972333 +0000 UTC m=+154.690948452" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.538811 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" event={"ID":"ba766e4c-056f-4be6-a4b9-05592b641f87","Type":"ContainerStarted","Data":"63417935118a7c173d443e363c6575264227831b1a94822efbc7be942deeeeba"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.549338 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-77jcb" podStartSLOduration=5.54931603 podStartE2EDuration="5.54931603s" podCreationTimestamp="2026-02-03 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.549241328 +0000 UTC m=+154.705217457" watchObservedRunningTime="2026-02-03 10:04:44.54931603 +0000 UTC m=+154.705292159" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.551628 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" event={"ID":"cde7673b-c4b1-4060-86cd-cac7120de9bf","Type":"ContainerStarted","Data":"63fbbf9ee06318f4203063b75261e35739310c9dd1b8622a18a36ebd23fe5276"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.551675 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" event={"ID":"cde7673b-c4b1-4060-86cd-cac7120de9bf","Type":"ContainerStarted","Data":"646277fedd17218abbc0cad255536f07c18ed6906549b680f077c3653992eba5"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.554454 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" event={"ID":"9b9c4aab-790c-4581-bfc2-ad1d7302c704","Type":"ContainerStarted","Data":"15e10260ef913b6b44e27ef0b7816cd144403f167a0779e8880ec7a69901a07c"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.578622 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-ljpd5" podStartSLOduration=129.578596503 podStartE2EDuration="2m9.578596503s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.574137896 +0000 UTC m=+154.730114035" watchObservedRunningTime="2026-02-03 10:04:44.578596503 +0000 UTC m=+154.734572632" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.586661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" event={"ID":"ad56317f-8d37-4d59-9abe-346b4340a30c","Type":"ContainerStarted","Data":"561515a2fa3c14b15007ba96e1540c1eea0059aab141c67d389b4f4a91a3b04d"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.589072 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" event={"ID":"b693a4b6-8aa6-489e-a797-fa486eab7443","Type":"ContainerStarted","Data":"3c09bdc0fc16bf94389e2c826ab81bb9d2595ddca0db77387061ad8ac768b3fa"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.590417 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.591864 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" podStartSLOduration=129.59184969 podStartE2EDuration="2m9.59184969s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.590301346 +0000 UTC m=+154.746277475" watchObservedRunningTime="2026-02-03 10:04:44.59184969 +0000 UTC m=+154.747825849" Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.592846 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.092831468 +0000 UTC m=+155.248807597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.618722 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" event={"ID":"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c","Type":"ContainerStarted","Data":"7bd5cd5437487cb168e25c92e0417529bd1becc82ce0d8d6889d660ccc99f901"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.628981 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" event={"ID":"b075f5c7-f95f-4883-8d94-d1b64bc3c451","Type":"ContainerStarted","Data":"30c448f2a29441f24eefd6e7d24e4234e2550ba0183f51cdb0b88e4eb91d5b59"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.631302 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" podStartSLOduration=129.631285101 podStartE2EDuration="2m9.631285101s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.618805186 +0000 UTC m=+154.774781315" watchObservedRunningTime="2026-02-03 10:04:44.631285101 +0000 UTC m=+154.787261230" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.641609 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" podStartSLOduration=129.641592004 podStartE2EDuration="2m9.641592004s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.640600916 +0000 UTC 
m=+154.796577065" watchObservedRunningTime="2026-02-03 10:04:44.641592004 +0000 UTC m=+154.797568143" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.644533 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jvtp4" event={"ID":"d8101cd0-5430-4786-bf8a-3d9c60ad1f7d","Type":"ContainerStarted","Data":"3222ee61b2c693351e65e3c9805fb25da78814dc65c7e68669f689bfa569da6e"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.644747 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jvtp4" event={"ID":"d8101cd0-5430-4786-bf8a-3d9c60ad1f7d","Type":"ContainerStarted","Data":"80515b9fecee374b5b46af16212dfe32a98caf42e9abceb1859afbb4272d8ccc"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.645089 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jvtp4" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.660689 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" podStartSLOduration=129.660669416 podStartE2EDuration="2m9.660669416s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.660633375 +0000 UTC m=+154.816609504" watchObservedRunningTime="2026-02-03 10:04:44.660669416 +0000 UTC m=+154.816645565" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.661618 5010 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvtp4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.661653 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvtp4" podUID="d8101cd0-5430-4786-bf8a-3d9c60ad1f7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.662245 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" event={"ID":"291724bc-0382-45d5-a089-356f8e04feb5","Type":"ContainerStarted","Data":"a8e7175cf248ee167e4bce18e263051f06083ef8fce008268671d4d23c14b09d"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.669550 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" event={"ID":"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26","Type":"ContainerStarted","Data":"797b45f8a292343ce10b798cb89191b38f38d31166c410472811969dfabb16ff"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.669598 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" event={"ID":"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26","Type":"ContainerStarted","Data":"b7b98936602d666e0476c5533daefd7d52973ccc05815593539ba02cb185939b"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.695886 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.697164 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.197145003 +0000 UTC m=+155.353121132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.697637 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" event={"ID":"5a475011-4dc0-4490-829a-8016f3b0e8a2","Type":"ContainerStarted","Data":"a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.698256 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.698943 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.701828 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.711715 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-whpdl" event={"ID":"58ae0ba7-4454-4bec-87ac-432b346ee643","Type":"ContainerStarted","Data":"a610fbebdc3ffa04f0473e337125d5909ac8c2a69e900a69a42c9394815ffb75"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.719130 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" podStartSLOduration=129.719106298 podStartE2EDuration="2m9.719106298s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.690195976 +0000 UTC m=+154.846172105" watchObservedRunningTime="2026-02-03 10:04:44.719106298 +0000 UTC m=+154.875082427" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.728934 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" event={"ID":"e9dc4ca7-8fe2-4479-989b-0cc98c651c96","Type":"ContainerStarted","Data":"90a8ac0aae794574cdb438e1ebde8fd7ef59d57a49f3d9f4465932e4b5db7b87"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.728985 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" 
event={"ID":"e9dc4ca7-8fe2-4479-989b-0cc98c651c96","Type":"ContainerStarted","Data":"11cb1b03743b8508d914607707efee374c51c7de656433e84775f38a25f8a0fc"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.729560 5010 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rkqd6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.729609 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.731179 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.734387 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jvtp4" podStartSLOduration=129.734369832 podStartE2EDuration="2m9.734369832s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.713273082 +0000 UTC m=+154.869249221" watchObservedRunningTime="2026-02-03 10:04:44.734369832 +0000 UTC m=+154.890345961" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.741122 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.745306 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kg4f"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.749125 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" event={"ID":"45194a2a-320c-439d-9070-2c534070b7e4","Type":"ContainerStarted","Data":"fec05ca2955a10df7039d4ef3ec746434bb3f8c492847ff25649e70ce1c6026c"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.749250 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" event={"ID":"45194a2a-320c-439d-9070-2c534070b7e4","Type":"ContainerStarted","Data":"373e3b089c1699c0575f908abd08461b75848ab282b800ce89ccc4ab65b90340"} Feb 03 10:04:44 crc kubenswrapper[5010]: W0203 10:04:44.770638 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b5592be_8839_4660_a4c4_ab662fc975eb.slice/crio-2ade3cdf2529ce4152b52a6e4a45299bf6c1e2325f1341f2c73a3d85ad1e71e8 WatchSource:0}: Error finding container 2ade3cdf2529ce4152b52a6e4a45299bf6c1e2325f1341f2c73a3d85ad1e71e8: Status 404 returned error can't find the container with id 2ade3cdf2529ce4152b52a6e4a45299bf6c1e2325f1341f2c73a3d85ad1e71e8 Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.774625 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.781704 5010 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" podStartSLOduration=129.781679027 podStartE2EDuration="2m9.781679027s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.768026479 +0000 UTC m=+154.924002608" watchObservedRunningTime="2026-02-03 10:04:44.781679027 +0000 UTC m=+154.937655176" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.793800 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.801076 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.806577 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.306558234 +0000 UTC m=+155.462534363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.809104 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m4jjq"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.850761 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.852126 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" podStartSLOduration=129.852105649 podStartE2EDuration="2m9.852105649s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.798359061 +0000 UTC m=+154.954335190" watchObservedRunningTime="2026-02-03 10:04:44.852105649 +0000 UTC m=+155.008081778" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.878053 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c9t7q"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.892544 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.918470 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.918792 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-whpdl" podStartSLOduration=129.918773514 podStartE2EDuration="2m9.918773514s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.898521659 +0000 UTC m=+155.054497788" watchObservedRunningTime="2026-02-03 10:04:44.918773514 +0000 UTC m=+155.074749643" Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.918956 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.418937089 +0000 UTC m=+155.574913218 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.920809 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-x7hq6"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.992028 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" podStartSLOduration=129.992009617 podStartE2EDuration="2m9.992009617s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.981679023 +0000 UTC m=+155.137655152" watchObservedRunningTime="2026-02-03 10:04:44.992009617 +0000 UTC m=+155.147985746" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.014206 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" podStartSLOduration=129.014189497 podStartE2EDuration="2m9.014189497s" podCreationTimestamp="2026-02-03 10:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:45.007540638 +0000 UTC m=+155.163516767" watchObservedRunningTime="2026-02-03 10:04:45.014189497 +0000 UTC m=+155.170165626" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.023268 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.023682 5010 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.523653836 +0000 UTC m=+155.679629965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.026134 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f9lhg"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.040983 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.049081 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.053824 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vxx8p"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.074243 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.102738 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.122760 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.123801 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.124080 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.624066761 +0000 UTC m=+155.780042890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.169998 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.199597 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.205466 5010 patch_prober.go:28] interesting pod/router-default-5444994796-whpdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 10:04:45 crc kubenswrapper[5010]: [-]has-synced failed: reason withheld Feb 03 10:04:45 crc kubenswrapper[5010]: [+]process-running ok Feb 03 10:04:45 crc kubenswrapper[5010]: healthz check failed Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.205524 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whpdl" podUID="58ae0ba7-4454-4bec-87ac-432b346ee643" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.221235 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.224660 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.224955 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.724942439 +0000 UTC m=+155.880918568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: W0203 10:04:45.258548 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ddcb32c_fe4a_4f24_bc77_d6bc56562d75.slice/crio-c3eea924367a5036aaeefe59a51974e32e0154c319bec0b602fa06f78f2e5fb8 WatchSource:0}: Error finding container c3eea924367a5036aaeefe59a51974e32e0154c319bec0b602fa06f78f2e5fb8: Status 404 returned error can't find the container with id c3eea924367a5036aaeefe59a51974e32e0154c319bec0b602fa06f78f2e5fb8 Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.326875 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.327605 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.827584037 +0000 UTC m=+155.983560166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.429117 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.429536 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.929521116 +0000 UTC m=+156.085497245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.532037 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.533683 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.033663676 +0000 UTC m=+156.189639805 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.637658 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.638002 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.137987382 +0000 UTC m=+156.293963511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.741780 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.741954 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.241926998 +0000 UTC m=+156.397903127 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.742116 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.742485 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.242474353 +0000 UTC m=+156.398450482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.828820 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vxx8p" event={"ID":"d5af6be2-06e9-4fbc-a138-ada090853bc7","Type":"ContainerStarted","Data":"37a5bf791d71df2e3c42ff9e212a92bbb51e5c149897cdf18b9c17176163a868"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.829143 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vxx8p" event={"ID":"d5af6be2-06e9-4fbc-a138-ada090853bc7","Type":"ContainerStarted","Data":"c99e2accf1e9f7bf057557faccf7afc119f9a4f625184d3dca3aa52fa55e9733"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.845913 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.846778 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.346762538 +0000 UTC m=+156.502738667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.856283 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vxx8p" podStartSLOduration=6.856259948 podStartE2EDuration="6.856259948s" podCreationTimestamp="2026-02-03 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:45.85173647 +0000 UTC m=+156.007712609" watchObservedRunningTime="2026-02-03 10:04:45.856259948 +0000 UTC m=+156.012236087" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.864060 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" event={"ID":"433ae711-459e-4627-83c1-0fecfe929c60","Type":"ContainerStarted","Data":"f93b801ce0dca0c469ff09982034831af93b63cf34d684cee9bfe492088d1762"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.873834 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" event={"ID":"ec11c4de-b7ae-4b50-ab95-20be670ab6e8","Type":"ContainerStarted","Data":"b5f05f52af61cfe26c6aab58a8b996a878767b581abf72040ffaed251a9971df"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.873898 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" event={"ID":"ec11c4de-b7ae-4b50-ab95-20be670ab6e8","Type":"ContainerStarted","Data":"293f758cfef2035f88ec5bc09cf396c21e4fa0ec8021ab65013f44898c950667"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.896647 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" event={"ID":"b693a4b6-8aa6-489e-a797-fa486eab7443","Type":"ContainerStarted","Data":"294d969e011425258ad251779e47b0c179a8f9497cdd382eaf2ca07a38e507c1"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.897646 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.915147 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" event={"ID":"cde7673b-c4b1-4060-86cd-cac7120de9bf","Type":"ContainerStarted","Data":"7fa36036bb193f801098485fb02f2ce8c3dab3f18e9cdc63b4415c5e8ec9f25d"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.946969 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" event={"ID":"0c3f3f4e-122f-40b8-a3f1-d868a36640a1","Type":"ContainerStarted","Data":"3fd62a71078b1cb43038650754e215e13cac68a8cf4058f3c96fc00b7c1254e4"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.947014 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" 
event={"ID":"0c3f3f4e-122f-40b8-a3f1-d868a36640a1","Type":"ContainerStarted","Data":"c86b1c135cccd6d567565de23857d5e3ace4b68f753e7f63499d354b07f9ee1a"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.947705 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.949279 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.449266462 +0000 UTC m=+156.605242591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.963949 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" podStartSLOduration=130.963932379 podStartE2EDuration="2m10.963932379s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:45.962987403 +0000 UTC m=+156.118963532" watchObservedRunningTime="2026-02-03 10:04:45.963932379 +0000 UTC m=+156.119908508" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.964286 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" podStartSLOduration=130.964282139 podStartE2EDuration="2m10.964282139s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:45.91296212 +0000 UTC m=+156.068938249" watchObservedRunningTime="2026-02-03 10:04:45.964282139 +0000 UTC m=+156.120258258" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.995890 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" event={"ID":"9cddf065-d958-4bf4-b5a8-67321cba2f67","Type":"ContainerStarted","Data":"5be02c0cccee5cc5627eacb85d0058e31cee57c79bec998d1cc510ca71f853da"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.995946 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" event={"ID":"9cddf065-d958-4bf4-b5a8-67321cba2f67","Type":"ContainerStarted","Data":"581875668ab6c17ac7b5b9be84de72b09eb74b7d738bbea9f96cccaeb2f81662"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.997306 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:46 crc 
kubenswrapper[5010]: I0203 10:04:46.022842 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" event={"ID":"98d0bd22-70a8-4496-9074-3251c15e5b59","Type":"ContainerStarted","Data":"e81a392a89a2b15b43b8d2297fe5d7d2ca7f9ba8526d5464b4476a06ec368f96"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.029884 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" podStartSLOduration=131.029864244 podStartE2EDuration="2m11.029864244s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:45.999407498 +0000 UTC m=+156.155383647" watchObservedRunningTime="2026-02-03 10:04:46.029864244 +0000 UTC m=+156.185840383" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.030805 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" podStartSLOduration=131.03079908 podStartE2EDuration="2m11.03079908s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.027813526 +0000 UTC m=+156.183789665" watchObservedRunningTime="2026-02-03 10:04:46.03079908 +0000 UTC m=+156.186775220" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.031313 5010 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-65mrf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.031352 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" podUID="9cddf065-d958-4bf4-b5a8-67321cba2f67" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.044782 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" event={"ID":"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d","Type":"ContainerStarted","Data":"940ad3dd3fbb496db0baca4fe005c88ba3a8b5856d186e50e2353dc0d2659e9d"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.053986 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.055164 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.555144693 +0000 UTC m=+156.711120822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.067736 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" event={"ID":"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3","Type":"ContainerStarted","Data":"f12d5d4b66060063a6e5fbeb3be26c884e2d80745ae9248253b4aa0557708464"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.067797 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" event={"ID":"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3","Type":"ContainerStarted","Data":"91eda359acc35811911d91d736b4f5dfa8bc7017b4342b54f6f3969cb4b1a75b"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.067813 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" event={"ID":"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3","Type":"ContainerStarted","Data":"046e9483d7e5ef6675a2efa85fd05ebbbe9383866d5c894cf4ec997b25f9780a"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.071435 5010 generic.go:334] "Generic (PLEG): container finished" podID="51fcb019-af4d-4f3d-b1b0-4b4e6761db7c" containerID="44762100bff179e19e68fc7183f3f9b331a0d53e199426f84db2785f934fb945" exitCode=0 Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.071536 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" event={"ID":"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c","Type":"ContainerDied","Data":"44762100bff179e19e68fc7183f3f9b331a0d53e199426f84db2785f934fb945"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.087817 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" podStartSLOduration=131.087791351 podStartE2EDuration="2m11.087791351s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.053691421 +0000 UTC m=+156.209667550" watchObservedRunningTime="2026-02-03 10:04:46.087791351 +0000 UTC m=+156.243767490" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.104383 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" event={"ID":"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899","Type":"ContainerStarted","Data":"3981a613e4c2fbdad5b4c2bb31b2a507dd0406e8f86fd4c60620c9e72f9533d8"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.112778 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" event={"ID":"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26","Type":"ContainerStarted","Data":"2c0df893671116c4308e9f2a19b12ebca23d86f810b769a32c2ae536ba86f83f"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.119144 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" podStartSLOduration=131.119126322 podStartE2EDuration="2m11.119126322s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.089891141 +0000 UTC m=+156.245867290" watchObservedRunningTime="2026-02-03 10:04:46.119126322 +0000 UTC m=+156.275102451" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.141034 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" podStartSLOduration=131.141017994 podStartE2EDuration="2m11.141017994s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.138497052 +0000 UTC m=+156.294473181" watchObservedRunningTime="2026-02-03 10:04:46.141017994 +0000 UTC m=+156.296994113" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.155368 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" event={"ID":"d882e1bb-7ece-45ea-9e5e-0d23f162f06e","Type":"ContainerStarted","Data":"38f925966a34278b067557de50e9c41e692b377ad6073e9c8f649efcd66ae491"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.155411 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" event={"ID":"d882e1bb-7ece-45ea-9e5e-0d23f162f06e","Type":"ContainerStarted","Data":"4c9eef1ef6b1b398b5b6d439972963b0ced43649ecec034e904e5f1abffb1f27"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.160690 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.163104 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.663091632 +0000 UTC m=+156.819067761 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.180058 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m4jjq" event={"ID":"a3d78816-3c67-4a17-8951-b605e971aa3b","Type":"ContainerStarted","Data":"79cdc3dc2ea16554708cddd4eb7c71a2fb3c85e6241fe9da5c9c1f0b122574b9"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.182400 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m4jjq" event={"ID":"a3d78816-3c67-4a17-8951-b605e971aa3b","Type":"ContainerStarted","Data":"df5c052b0ca9ef3d931104c92a268f401887f2d5870fe9cfa48661f36fa33c30"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.184394 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" podStartSLOduration=130.184376987 podStartE2EDuration="2m10.184376987s" podCreationTimestamp="2026-02-03 10:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.182788052 +0000 UTC m=+156.338764181" watchObservedRunningTime="2026-02-03 10:04:46.184376987 +0000 UTC m=+156.340353116" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.204036 5010 patch_prober.go:28] interesting pod/router-default-5444994796-whpdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 10:04:46 crc kubenswrapper[5010]: [-]has-synced failed: reason withheld Feb 03 10:04:46 crc kubenswrapper[5010]: [+]process-running ok Feb 03 10:04:46 crc kubenswrapper[5010]: healthz check failed Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.204094 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whpdl" podUID="58ae0ba7-4454-4bec-87ac-432b346ee643" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.204954 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" event={"ID":"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd","Type":"ContainerStarted","Data":"f6bc7008aed2e1cc27b0e5157e43328f2571d68a98845128c53dc3f2ef0a9cab"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.256626 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" event={"ID":"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75","Type":"ContainerStarted","Data":"c3eea924367a5036aaeefe59a51974e32e0154c319bec0b602fa06f78f2e5fb8"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.261882 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.262264 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.76223473 +0000 UTC m=+156.918210859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.265910 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" event={"ID":"1b5592be-8839-4660-a4c4-ab662fc975eb","Type":"ContainerStarted","Data":"a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.265954 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" event={"ID":"1b5592be-8839-4660-a4c4-ab662fc975eb","Type":"ContainerStarted","Data":"2ade3cdf2529ce4152b52a6e4a45299bf6c1e2325f1341f2c73a3d85ad1e71e8"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.266491 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.275657 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" event={"ID":"c07afc79-e943-4e79-93ed-8eedd0ade1bc","Type":"ContainerStarted","Data":"7d36f05199bd4236b19601f8bb5bb2c733a5becdb85190b74621819ca44ec567"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.282987 5010 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6kg4f container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.283053 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" podUID="1b5592be-8839-4660-a4c4-ab662fc975eb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.292862 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" podStartSLOduration=131.292842531 podStartE2EDuration="2m11.292842531s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.291686378 +0000 UTC m=+156.447662507" watchObservedRunningTime="2026-02-03 10:04:46.292842531 +0000 UTC m=+156.448818660" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.296938 5010 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" event={"ID":"6e12e505-3d35-4b3e-8015-9e2341d4791e","Type":"ContainerStarted","Data":"a35aa10695f71d166b4c7d6d25f3126748c3f6a60fcfdf34ea00ea0d114b01ab"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.296990 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" event={"ID":"6e12e505-3d35-4b3e-8015-9e2341d4791e","Type":"ContainerStarted","Data":"69be82670c15aaf4ca975cbfb52e590fc00dad12980abd48ab36b0ab7886dccf"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.308833 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" event={"ID":"effb39d8-ef30-45f3-bf93-b9dbb8de2475","Type":"ContainerStarted","Data":"ca3afc0cb6bc25e4c74c1c85c6d68d2d62b975ad765679a0d7c02e5221220d70"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.316561 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" podStartSLOduration=131.316544975 podStartE2EDuration="2m11.316544975s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.315985319 +0000 UTC m=+156.471961448" watchObservedRunningTime="2026-02-03 10:04:46.316544975 +0000 UTC m=+156.472521104" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.327092 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" event={"ID":"df4fd08a-dcc8-4d5c-95ad-9a3542df3233","Type":"ContainerStarted","Data":"022f5568881c0ea59ebfda6fd0b3b4d0681587700cd7b14ffdc63e70cb157b46"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.327136 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.327147 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" event={"ID":"df4fd08a-dcc8-4d5c-95ad-9a3542df3233","Type":"ContainerStarted","Data":"8a28e2edc657a5048c58ba3b8cd63019dd256e0941b8bb0d428cde6696ecbb40"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.329711 5010 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvtp4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.329766 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvtp4" podUID="d8101cd0-5430-4786-bf8a-3d9c60ad1f7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.336557 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.337173 5010 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.337250 5010 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sgfk5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.337309 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" podUID="df4fd08a-dcc8-4d5c-95ad-9a3542df3233" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.344084 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" podStartSLOduration=131.344061137 podStartE2EDuration="2m11.344061137s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.341391841 +0000 UTC m=+156.497367990" watchObservedRunningTime="2026-02-03 10:04:46.344061137 +0000 UTC m=+156.500037266" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.364139 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.365555 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.865540688 +0000 UTC m=+157.021516817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.391683 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.391743 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.399952 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" podStartSLOduration=131.399937595 podStartE2EDuration="2m11.399937595s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.398066482 +0000 UTC m=+156.554042621" watchObservedRunningTime="2026-02-03 10:04:46.399937595 +0000 UTC m=+156.555913724" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.465183 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.466552 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.966538489 +0000 UTC m=+157.122514618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.548537 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.572058 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.572479 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.072463331 +0000 UTC m=+157.228439460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.673535 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.673876 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.173843353 +0000 UTC m=+157.329819482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.674043 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.674453 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.1744399 +0000 UTC m=+157.330416029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.774892 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.775467 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.275450682 +0000 UTC m=+157.431426811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.876949 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.877325 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.377310708 +0000 UTC m=+157.533286837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.898366 5010 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5v56r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.898439 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" podUID="b693a4b6-8aa6-489e-a797-fa486eab7443" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.977567 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.977843 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.477828556 +0000 UTC m=+157.633804685 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.079183 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.079555 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.579536457 +0000 UTC m=+157.735512576 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.180521 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.180830 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.680811317 +0000 UTC m=+157.836787446 (durationBeforeRetry 500ms). 
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.202895 5010 patch_prober.go:28] interesting pod/router-default-5444994796-whpdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 10:04:47 crc kubenswrapper[5010]: [-]has-synced failed: reason withheld
Feb 03 10:04:47 crc kubenswrapper[5010]: [+]process-running ok
Feb 03 10:04:47 crc kubenswrapper[5010]: healthz check failed
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.202954 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whpdl" podUID="58ae0ba7-4454-4bec-87ac-432b346ee643" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.282311 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.282621 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.782609331 +0000 UTC m=+157.938585460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
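Each failure is recorded by the operation executor, and nestedpendingoperations refuses to start another attempt on the same volume key until the printed retry time has passed. A minimal sketch of that gate, assuming a fixed 500 ms durationBeforeRetry as in these entries (the real executor grows the delay exponentially on repeated failures):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pendingOp tracks the last failure for one volume operation key,
// e.g. "{volumeName:... podName:... nodeName:}" in the log above.
// Hypothetical simplified type.
type pendingOp struct {
	lastError time.Time
	backoff   time.Duration
}

var errNoRetry = errors.New("no retries permitted yet")

// tryStart allows a new attempt only once lastError+backoff has
// passed, mirroring "No retries permitted until <t> (durationBeforeRetry 500ms)".
func (p *pendingOp) tryStart(now time.Time) error {
	if retryAt := p.lastError.Add(p.backoff); now.Before(retryAt) {
		return fmt.Errorf("%w: until %s (durationBeforeRetry %s)",
			errNoRetry, retryAt.Format(time.RFC3339Nano), p.backoff)
	}
	return nil
}

func main() {
	op := &pendingOp{lastError: time.Now(), backoff: 500 * time.Millisecond}
	if err := op.tryStart(time.Now()); err != nil {
		fmt.Println(err) // attempt rejected inside the window
	}
	time.Sleep(600 * time.Millisecond)
	fmt.Println(op.tryStart(time.Now())) // <nil>: window has passed
}
```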
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.332762 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" event={"ID":"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c","Type":"ContainerStarted","Data":"3d5b0314dbf5f7aa34902e2f182fbe043e436b36cb6ed7a9cef2d51c643e7586"}
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.333022 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5"
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.336019 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" event={"ID":"effb39d8-ef30-45f3-bf93-b9dbb8de2475","Type":"ContainerStarted","Data":"59d2e6be15ba379b92c54da78afd9e360e0303aeed041e8422f8591b6facc1d5"}
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.347039 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" event={"ID":"0c3f3f4e-122f-40b8-a3f1-d868a36640a1","Type":"ContainerStarted","Data":"23c69c382437672d737ee1c9253d3b649d64f02967342da4e53a5491d6d11f41"}
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.348808 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" event={"ID":"98d0bd22-70a8-4496-9074-3251c15e5b59","Type":"ContainerStarted","Data":"cdf675738afd3e9673d7c8a3c2913d7c4bc0acfe2b768177dba47b892ee26961"}
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.350846 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" event={"ID":"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d","Type":"ContainerStarted","Data":"d08219555fdd6f860a0a0c79c84a54c4b3e8a908b3af087bc85c670dc0d42cca"}
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.352417 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" event={"ID":"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75","Type":"ContainerStarted","Data":"383a0977c6395c435f6bb1299748991ed3b67014a76c643016fe6b5a4e816b5f"}
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.352517 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" event={"ID":"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75","Type":"ContainerStarted","Data":"e5913507e44fa6d528e47da0f0114d9206e0ec497acfe02ae985ddd84c0403e9"}
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.352926 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99"
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.354529 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" event={"ID":"c07afc79-e943-4e79-93ed-8eedd0ade1bc","Type":"ContainerStarted","Data":"f0df05e572c326ea9e0d57460e80d77f10d1a3c2b4d4095e934f18b8ec8a413b"}
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.354668 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" event={"ID":"c07afc79-e943-4e79-93ed-8eedd0ade1bc","Type":"ContainerStarted","Data":"69cfa19a166eae9cd879b7005d145b980c74b652e62753b8299ac40f360fdf1c"}
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.355851 5010 generic.go:334] "Generic (PLEG): container finished" podID="433ae711-459e-4627-83c1-0fecfe929c60" containerID="bf00e2dc0609d8f8edc0d28df9931c5f0a4f06db5d7656d44ecf648458c7ddb9" exitCode=0
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.355964 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" event={"ID":"433ae711-459e-4627-83c1-0fecfe929c60","Type":"ContainerDied","Data":"bf00e2dc0609d8f8edc0d28df9931c5f0a4f06db5d7656d44ecf648458c7ddb9"}
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.358448 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" event={"ID":"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899","Type":"ContainerStarted","Data":"ea0da8ac601491fc423c1f3ea9db2da711074561434d518de7c75c9e854318a2"}
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.365483 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m4jjq" event={"ID":"a3d78816-3c67-4a17-8951-b605e971aa3b","Type":"ContainerStarted","Data":"a91d03d2e43775e69573400698a0acd2d175d55e99844b3b5eafec60117cd010"}
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.365831 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-m4jjq"
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.368363 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" event={"ID":"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd","Type":"ContainerStarted","Data":"2bc4721c936d5b0596015432afe46b59d8f2e781c92a4deae0330e775de3eb67"}
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.368476 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" event={"ID":"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd","Type":"ContainerStarted","Data":"07d6436cf7500596fc6c1d939b7bc2ce20fb17332064138acadb1954b3034551"}
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.373125 5010 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6kg4f container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.373188 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" podUID="1b5592be-8839-4660-a4c4-ab662fc975eb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.379134 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf"
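The SyncLoop (PLEG) entries come from the Pod Lifecycle Event Generator, which periodically relists containers from the runtime and turns state transitions into events for the kubelet sync loop. A minimal relist sketch with hypothetical types and truncated container IDs (the real PLEG diffs full pod records from the runtime cache):

```go
package main

import "fmt"

type plegEventType string

const (
	containerStarted plegEventType = "ContainerStarted"
	containerDied    plegEventType = "ContainerDied"
)

type plegEvent struct {
	PodID string
	Type  plegEventType
	Data  string // container ID
}

// relist diffs the running set of containers for one pod between two
// scans and emits one event per transition, which is the shape of the
// ContainerStarted/ContainerDied events in the entries above.
func relist(podID string, prev, cur map[string]bool) []plegEvent {
	var events []plegEvent
	for id := range cur {
		if !prev[id] {
			events = append(events, plegEvent{podID, containerStarted, id})
		}
	}
	for id := range prev {
		if !cur[id] {
			events = append(events, plegEvent{podID, containerDied, id})
		}
	}
	return events
}

func main() {
	prev := map[string]bool{"bf00e2dc0609d8f8": true} // container seen last scan
	cur := map[string]bool{"e1fad7219fde604e": true}  // container seen this scan
	for _, e := range relist("433ae711-459e-4627-83c1-0fecfe929c60", prev, cur) {
		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.PodID, e.Type, e.Data)
	}
}
```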
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.385946 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.386232 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.886198946 +0000 UTC m=+158.042175075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.386661 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.387109 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.887101172 +0000 UTC m=+158.043077301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.391504 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r"
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.400512 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" podStartSLOduration=132.400494472 podStartE2EDuration="2m12.400494472s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:47.35996295 +0000 UTC m=+157.515939079" watchObservedRunningTime="2026-02-03 10:04:47.400494472 +0000 UTC m=+157.556470601"
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.401786 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" podStartSLOduration=132.401768149 podStartE2EDuration="2m12.401768149s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:47.398601579 +0000 UTC m=+157.554577708" watchObservedRunningTime="2026-02-03 10:04:47.401768149 +0000 UTC m=+157.557744278"
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.403026 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5"
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.423575 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" podStartSLOduration=132.423558268 podStartE2EDuration="2m12.423558268s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:47.423353792 +0000 UTC m=+157.579329921" watchObservedRunningTime="2026-02-03 10:04:47.423558268 +0000 UTC m=+157.579534397"
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.442016 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" podStartSLOduration=132.442001372 podStartE2EDuration="2m12.442001372s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:47.440692005 +0000 UTC m=+157.596668134" watchObservedRunningTime="2026-02-03 10:04:47.442001372 +0000 UTC m=+157.597977501"
Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.488321 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
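The pod_startup_latency_tracker entries report podStartSLOduration as pod-creation-to-running time minus time spent pulling images; here both pull timestamps are the zero time, so nothing is discounted and the SLO duration equals podStartE2EDuration (132.4s = 2m12.4s). A small sketch under that reading:

```go
package main

import (
	"fmt"
	"time"
)

// startupSLO returns pod startup latency excluding image pull time,
// matching the tracker's podStartSLOduration: when no pull happened
// (zero pull timestamps, as in the log) it equals the end-to-end
// duration. Hypothetical helper, not the tracker's actual code.
func startupSLO(created, observedRunning, firstPull, lastPull time.Time) time.Duration {
	d := observedRunning.Sub(created)
	if !firstPull.IsZero() && !lastPull.IsZero() {
		d -= lastPull.Sub(firstPull) // discount time spent pulling images
	}
	return d
}

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-02-03T10:02:35Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-02-03T10:04:47.359962950Z")
	// Prints roughly 2m12s, in line with podStartE2EDuration="2m12.400494472s".
	fmt.Println(startupSLO(created, running, time.Time{}, time.Time{}))
}
```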
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.489868 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.989848313 +0000 UTC m=+158.145824442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.501951 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" podStartSLOduration=132.501938647 podStartE2EDuration="2m12.501938647s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:47.501491324 +0000 UTC m=+157.657467463" watchObservedRunningTime="2026-02-03 10:04:47.501938647 +0000 UTC m=+157.657914766" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.540313 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-m4jjq" podStartSLOduration=8.540299197 podStartE2EDuration="8.540299197s" podCreationTimestamp="2026-02-03 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:47.538729493 +0000 UTC m=+157.694705622" watchObservedRunningTime="2026-02-03 10:04:47.540299197 +0000 UTC m=+157.696275326" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.599185 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.599626 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.099611552 +0000 UTC m=+158.255587681 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.621018 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" podStartSLOduration=132.6209952 podStartE2EDuration="2m12.6209952s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:47.579192153 +0000 UTC m=+157.735168282" watchObservedRunningTime="2026-02-03 10:04:47.6209952 +0000 UTC m=+157.776971339" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.699994 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.700372 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.200340316 +0000 UTC m=+158.356316445 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.801758 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.802099 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.302087979 +0000 UTC m=+158.458064108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.902818 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.903438 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.40342321 +0000 UTC m=+158.559399339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.005281 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.005688 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.505660227 +0000 UTC m=+158.661636406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.106890 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.107120 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.60707891 +0000 UTC m=+158.763055039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.107230 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.107501 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.607489692 +0000 UTC m=+158.763465811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.208271 5010 patch_prober.go:28] interesting pod/router-default-5444994796-whpdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 10:04:48 crc kubenswrapper[5010]: [-]has-synced failed: reason withheld Feb 03 10:04:48 crc kubenswrapper[5010]: [+]process-running ok Feb 03 10:04:48 crc kubenswrapper[5010]: healthz check failed Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.208332 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whpdl" podUID="58ae0ba7-4454-4bec-87ac-432b346ee643" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.208854 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.209327 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.709307397 +0000 UTC m=+158.865283526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.209420 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.209809 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.709795811 +0000 UTC m=+158.865771940 (durationBeforeRetry 500ms). 
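The router's startup probe body follows the common healthz convention: one [+]/[-] line per named check, a trailing "healthz check failed", and HTTP 500 when any check fails. A minimal aggregator in that style, with hypothetical checks standing in for the router's backend-http, has-synced and process-running:

```go
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

// healthz writes one "[+]name ok" / "[-]name failed" line per check
// and returns 500 when anything fails, matching the probe output
// "[-]backend-http failed: reason withheld ... healthz check failed".
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			fmt.Fprint(w, body+"healthz check failed\n")
			return
		}
		fmt.Fprint(w, body+"ok\n")
	}
}

func main() {
	http.Handle("/healthz", healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }}, // fails until backends sync
		{"process-running", func() error { return nil }},
	}))
	_ = http.ListenAndServe(":8080", nil)
}
```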
Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.307739 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f8ldc"]
Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.309228 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f8ldc"
Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.309985 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.310188 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.810167834 +0000 UTC m=+158.966143963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.310420 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.310670 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.810663868 +0000 UTC m=+158.966639987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.312093 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.330892 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f8ldc"]
Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.374667 5010 generic.go:334] "Generic (PLEG): container finished" podID="9b9c4aab-790c-4581-bfc2-ad1d7302c704" containerID="15e10260ef913b6b44e27ef0b7816cd144403f167a0779e8880ec7a69901a07c" exitCode=0
Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.374730 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" event={"ID":"9b9c4aab-790c-4581-bfc2-ad1d7302c704","Type":"ContainerDied","Data":"15e10260ef913b6b44e27ef0b7816cd144403f167a0779e8880ec7a69901a07c"}
Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.383406 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" event={"ID":"433ae711-459e-4627-83c1-0fecfe929c60","Type":"ContainerStarted","Data":"e1fad7219fde604ee1964cc2b115acc62f018b650d6a77feec226a4b418a2a60"}
Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.384756 5010 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.399530 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" event={"ID":"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899","Type":"ContainerStarted","Data":"2f06a939b0376260061f39514a9ddf81f12b6b0eba4c4244aad7cf2ba24e07a8"}
Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.399571 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" event={"ID":"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899","Type":"ContainerStarted","Data":"cbd1c570c173ba7f69c1dd2787e702a3eaf115b6cfb85078992b81d0837d78ea"}
Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.411653 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.411981 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjvqs\" (UniqueName: \"kubernetes.io/projected/5a09b802-00fe-4ff8-983e-58c495061478-kube-api-access-vjvqs\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc"
Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.412033 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-utilities\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc"
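At 10:04:48.384 the plugin watcher finally sees kubevirt.io.hostpath-provisioner-reg.sock appear under /var/lib/kubelet/plugins_registry, which is what eventually unblocks every retry above. Kubelet watches that directory for socket creation; a minimal sketch of the idea, assuming the third-party github.com/fsnotify/fsnotify package and a plain map standing in for the "desired state cache":

```go
package main

import (
	"log"
	"path/filepath"
	"strings"
	"time"

	"github.com/fsnotify/fsnotify"
)

// watchPluginSockets records every *.sock created under dir together
// with the time it was seen, the way the plugin watcher "adds socket
// path or updates timestamp in the desired state cache". Sketch only,
// not kubelet's plugin_watcher.go.
func watchPluginSockets(dir string, cache map[string]time.Time) error {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer w.Close()
	if err := w.Add(dir); err != nil {
		return err
	}
	for ev := range w.Events {
		if ev.Op&fsnotify.Create != 0 && strings.HasSuffix(ev.Name, ".sock") {
			cache[ev.Name] = time.Now()
			log.Printf("Adding socket path or updating timestamp to desired state cache path=%q", ev.Name)
		}
	}
	return nil
}

func main() {
	cache := map[string]time.Time{}
	dir := filepath.FromSlash("/var/lib/kubelet/plugins_registry")
	if err := watchPluginSockets(dir, cache); err != nil {
		log.Fatal(err)
	}
}
```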
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-utilities\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.412089 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-catalog-content\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.412226 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.912194165 +0000 UTC m=+159.068170294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.423531 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" podStartSLOduration=133.423511167 podStartE2EDuration="2m13.423511167s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:48.419208574 +0000 UTC m=+158.575184713" watchObservedRunningTime="2026-02-03 10:04:48.423511167 +0000 UTC m=+158.579487296" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.511518 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rhsmk"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.512450 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.513249 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-catalog-content\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.513392 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.513770 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjvqs\" (UniqueName: \"kubernetes.io/projected/5a09b802-00fe-4ff8-983e-58c495061478-kube-api-access-vjvqs\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.513999 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-utilities\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.514305 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.515595 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-catalog-content\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.516423 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-utilities\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.517716 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:49.017703935 +0000 UTC m=+159.173680064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.527629 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhsmk"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.558324 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjvqs\" (UniqueName: \"kubernetes.io/projected/5a09b802-00fe-4ff8-983e-58c495061478-kube-api-access-vjvqs\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.615361 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.615711 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-catalog-content\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.615813 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rkwl\" (UniqueName: \"kubernetes.io/projected/6b321403-09c3-4199-98ce-474deeea9d18-kube-api-access-8rkwl\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.615904 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-utilities\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.616125 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:49.116109722 +0000 UTC m=+159.272085851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.628633 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.698378 5010 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-03T10:04:48.384789856Z","Handler":null,"Name":""} Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.710747 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9nhlj"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.712231 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.717495 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-catalog-content\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.717763 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rkwl\" (UniqueName: \"kubernetes.io/projected/6b321403-09c3-4199-98ce-474deeea9d18-kube-api-access-8rkwl\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.717867 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-utilities\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.718000 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-catalog-content\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.718104 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.718250 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-utilities\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.718393 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:49.21837892 +0000 UTC m=+159.374355049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.727131 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9nhlj"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.779117 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rkwl\" (UniqueName: \"kubernetes.io/projected/6b321403-09c3-4199-98ce-474deeea9d18-kube-api-access-8rkwl\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.798638 5010 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.798684 5010 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.820754 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.820930 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2wnb\" (UniqueName: \"kubernetes.io/projected/e7d7a138-50ca-4706-b760-2fe5154b2796-kube-api-access-d2wnb\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.821038 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-catalog-content\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.821062 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-utilities\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.834489 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.851352 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.897365 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dgktg"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.902161 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.911638 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dgktg"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.921877 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.921955 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-catalog-content\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.921982 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-utilities\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.922025 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2wnb\" (UniqueName: \"kubernetes.io/projected/e7d7a138-50ca-4706-b760-2fe5154b2796-kube-api-access-d2wnb\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.922773 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-catalog-content\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.923062 5010 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-utilities\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.944063 5010 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.944107 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.944408 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f8ldc"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.949114 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2wnb\" (UniqueName: \"kubernetes.io/projected/e7d7a138-50ca-4706-b760-2fe5154b2796-kube-api-access-d2wnb\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.996476 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.022899 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-utilities\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.022967 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmkxt\" (UniqueName: \"kubernetes.io/projected/16b28bac-b8da-4fa7-8282-3b97ef4decac-kube-api-access-jmkxt\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.023042 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-catalog-content\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.027113 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.124753 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-utilities\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.124840 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmkxt\" (UniqueName: \"kubernetes.io/projected/16b28bac-b8da-4fa7-8282-3b97ef4decac-kube-api-access-jmkxt\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.124946 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-catalog-content\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.125324 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-utilities\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.125373 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-catalog-content\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.146203 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhsmk"] Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.149937 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmkxt\" (UniqueName: \"kubernetes.io/projected/16b28bac-b8da-4fa7-8282-3b97ef4decac-kube-api-access-jmkxt\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: W0203 10:04:49.150728 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b321403_09c3_4199_98ce_474deeea9d18.slice/crio-63d8474bfb4a1a954341a0c6e3ac0ed4a51edc38981d0b3fd911b0c631516f52 WatchSource:0}: Error finding container 63d8474bfb4a1a954341a0c6e3ac0ed4a51edc38981d0b3fd911b0c631516f52: Status 404 returned error can't find the container with id 63d8474bfb4a1a954341a0c6e3ac0ed4a51edc38981d0b3fd911b0c631516f52 Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.203065 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.204039 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:49 crc 
kubenswrapper[5010]: I0203 10:04:49.207309 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.229406 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.258649 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9nhlj"] Feb 03 10:04:49 crc kubenswrapper[5010]: W0203 10:04:49.269976 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7d7a138_50ca_4706_b760_2fe5154b2796.slice/crio-1b0c23388be323142da658c9f60348ab9cd0cc51111e7de9f4e1bb46c8a6bc8a WatchSource:0}: Error finding container 1b0c23388be323142da658c9f60348ab9cd0cc51111e7de9f4e1bb46c8a6bc8a: Status 404 returned error can't find the container with id 1b0c23388be323142da658c9f60348ab9cd0cc51111e7de9f4e1bb46c8a6bc8a Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.270933 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.415879 5010 generic.go:334] "Generic (PLEG): container finished" podID="5a09b802-00fe-4ff8-983e-58c495061478" containerID="fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908" exitCode=0 Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.416089 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ldc" event={"ID":"5a09b802-00fe-4ff8-983e-58c495061478","Type":"ContainerDied","Data":"fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908"} Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.416266 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ldc" event={"ID":"5a09b802-00fe-4ff8-983e-58c495061478","Type":"ContainerStarted","Data":"9b3e23c6c17315ac65a0626a6f5dc6fcfc45753c23f65c38f8420f31fc344706"} Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.419831 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.423066 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" event={"ID":"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899","Type":"ContainerStarted","Data":"fc1be3f0c60688bf688144cf6e3149397c5618238d9ca0779dad8f429552e5d8"} Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.424854 5010 generic.go:334] "Generic (PLEG): container finished" podID="6b321403-09c3-4199-98ce-474deeea9d18" containerID="bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47" exitCode=0 Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.424917 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhsmk" event={"ID":"6b321403-09c3-4199-98ce-474deeea9d18","Type":"ContainerDied","Data":"bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47"} Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.424947 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhsmk" 
event={"ID":"6b321403-09c3-4199-98ce-474deeea9d18","Type":"ContainerStarted","Data":"63d8474bfb4a1a954341a0c6e3ac0ed4a51edc38981d0b3fd911b0c631516f52"} Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.427389 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nhlj" event={"ID":"e7d7a138-50ca-4706-b760-2fe5154b2796","Type":"ContainerStarted","Data":"1b0c23388be323142da658c9f60348ab9cd0cc51111e7de9f4e1bb46c8a6bc8a"} Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.488233 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" podStartSLOduration=10.488197197 podStartE2EDuration="10.488197197s" podCreationTimestamp="2026-02-03 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:49.46299746 +0000 UTC m=+159.618973599" watchObservedRunningTime="2026-02-03 10:04:49.488197197 +0000 UTC m=+159.644173326" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.541189 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dgktg"] Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.589446 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x857s"] Feb 03 10:04:49 crc kubenswrapper[5010]: W0203 10:04:49.594113 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod594e9304_c63f_4d73_bcad_5258c1ebdd6d.slice/crio-4d0c21608e47f2a5fbe71a063022d5430ee94df368929ef6f0cd30bef83d5cd9 WatchSource:0}: Error finding container 4d0c21608e47f2a5fbe71a063022d5430ee94df368929ef6f0cd30bef83d5cd9: Status 404 returned error can't find the container with id 4d0c21608e47f2a5fbe71a063022d5430ee94df368929ef6f0cd30bef83d5cd9 Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.765649 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.845148 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b9c4aab-790c-4581-bfc2-ad1d7302c704-secret-volume\") pod \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.845335 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfsz9\" (UniqueName: \"kubernetes.io/projected/9b9c4aab-790c-4581-bfc2-ad1d7302c704-kube-api-access-qfsz9\") pod \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.845364 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume\") pod \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.846071 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume" (OuterVolumeSpecName: "config-volume") pod "9b9c4aab-790c-4581-bfc2-ad1d7302c704" (UID: "9b9c4aab-790c-4581-bfc2-ad1d7302c704"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.851049 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9c4aab-790c-4581-bfc2-ad1d7302c704-kube-api-access-qfsz9" (OuterVolumeSpecName: "kube-api-access-qfsz9") pod "9b9c4aab-790c-4581-bfc2-ad1d7302c704" (UID: "9b9c4aab-790c-4581-bfc2-ad1d7302c704"). InnerVolumeSpecName "kube-api-access-qfsz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.851245 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9c4aab-790c-4581-bfc2-ad1d7302c704-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9b9c4aab-790c-4581-bfc2-ad1d7302c704" (UID: "9b9c4aab-790c-4581-bfc2-ad1d7302c704"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.946916 5010 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b9c4aab-790c-4581-bfc2-ad1d7302c704-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.946954 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfsz9\" (UniqueName: \"kubernetes.io/projected/9b9c4aab-790c-4581-bfc2-ad1d7302c704-kube-api-access-qfsz9\") on node \"crc\" DevicePath \"\"" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.946964 5010 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.432050 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" event={"ID":"594e9304-c63f-4d73-bcad-5258c1ebdd6d","Type":"ContainerStarted","Data":"4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27"} Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.432096 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" event={"ID":"594e9304-c63f-4d73-bcad-5258c1ebdd6d","Type":"ContainerStarted","Data":"4d0c21608e47f2a5fbe71a063022d5430ee94df368929ef6f0cd30bef83d5cd9"} Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.432145 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.433385 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" event={"ID":"9b9c4aab-790c-4581-bfc2-ad1d7302c704","Type":"ContainerDied","Data":"68feaa08ed8d91769630ca032dc73a0d3797e1b08b8b7690cc25c9c07a16da2d"} Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.433426 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68feaa08ed8d91769630ca032dc73a0d3797e1b08b8b7690cc25c9c07a16da2d" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.433396 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.448809 5010 generic.go:334] "Generic (PLEG): container finished" podID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerID="6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf" exitCode=0 Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.448911 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nhlj" event={"ID":"e7d7a138-50ca-4706-b760-2fe5154b2796","Type":"ContainerDied","Data":"6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf"} Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.450776 5010 generic.go:334] "Generic (PLEG): container finished" podID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerID="3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f" exitCode=0 Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.450871 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgktg" event={"ID":"16b28bac-b8da-4fa7-8282-3b97ef4decac","Type":"ContainerDied","Data":"3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f"} Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.450905 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgktg" event={"ID":"16b28bac-b8da-4fa7-8282-3b97ef4decac","Type":"ContainerStarted","Data":"f8067043c468ce02991a947f5558cbe6d87a64ec40b08e86c4e947e44eed14bc"} Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.480472 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" podStartSLOduration=135.480451988 podStartE2EDuration="2m15.480451988s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:50.468688633 +0000 UTC m=+160.624664762" watchObservedRunningTime="2026-02-03 10:04:50.480451988 +0000 UTC m=+160.636428117" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.516401 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.520956 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w967c"] Feb 03 10:04:50 crc kubenswrapper[5010]: E0203 10:04:50.521203 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9c4aab-790c-4581-bfc2-ad1d7302c704" containerName="collect-profiles" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.521235 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9c4aab-790c-4581-bfc2-ad1d7302c704" containerName="collect-profiles" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.521363 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9c4aab-790c-4581-bfc2-ad1d7302c704" containerName="collect-profiles" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.522429 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.529669 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.552393 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w967c"] Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.658049 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw58w\" (UniqueName: \"kubernetes.io/projected/778b346c-f503-4364-9757-98c213d89edc-kube-api-access-mw58w\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.658118 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-utilities\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.658149 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-catalog-content\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.760693 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-catalog-content\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.760817 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw58w\" (UniqueName: \"kubernetes.io/projected/778b346c-f503-4364-9757-98c213d89edc-kube-api-access-mw58w\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.760867 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-utilities\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.761311 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-catalog-content\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.761345 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-utilities\") pod \"redhat-marketplace-w967c\" (UID: 
\"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.802308 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw58w\" (UniqueName: \"kubernetes.io/projected/778b346c-f503-4364-9757-98c213d89edc-kube-api-access-mw58w\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.848992 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.900089 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rp7rd"] Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.901066 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.915682 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp7rd"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.070609 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-catalog-content\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.070669 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgmtk\" (UniqueName: \"kubernetes.io/projected/49f8db32-0c68-4c72-9aad-a02ce0c958aa-kube-api-access-cgmtk\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.070689 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-utilities\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.172148 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-catalog-content\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.172765 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgmtk\" (UniqueName: \"kubernetes.io/projected/49f8db32-0c68-4c72-9aad-a02ce0c958aa-kube-api-access-cgmtk\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.172849 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-utilities\") pod \"redhat-marketplace-rp7rd\" (UID: 
\"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.172923 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-catalog-content\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.175366 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-utilities\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.196553 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgmtk\" (UniqueName: \"kubernetes.io/projected/49f8db32-0c68-4c72-9aad-a02ce0c958aa-kube-api-access-cgmtk\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.236427 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w967c"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.253194 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: W0203 10:04:51.260143 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod778b346c_f503_4364_9757_98c213d89edc.slice/crio-ccc904854d56565749138df195a8c2b29f6946a5393227b9fe1b124f630fe4e6 WatchSource:0}: Error finding container ccc904854d56565749138df195a8c2b29f6946a5393227b9fe1b124f630fe4e6: Status 404 returned error can't find the container with id ccc904854d56565749138df195a8c2b29f6946a5393227b9fe1b124f630fe4e6 Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.297445 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.421403 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.422970 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.427610 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.427779 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.432543 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.463896 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w967c" event={"ID":"778b346c-f503-4364-9757-98c213d89edc","Type":"ContainerStarted","Data":"ccc904854d56565749138df195a8c2b29f6946a5393227b9fe1b124f630fe4e6"} Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.500329 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5pgxf"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.504976 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.511493 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.529184 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5pgxf"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.578544 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.578599 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-catalog-content\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.578630 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.578697 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndvzg\" (UniqueName: \"kubernetes.io/projected/777b0b1e-96c3-4914-8b7b-d51186433cb7-kube-api-access-ndvzg\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.578764 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-utilities\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.580689 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.580717 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.594996 5010 patch_prober.go:28] interesting pod/console-f9d7485db-wtcpj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.595052 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wtcpj" podUID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.680420 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.680608 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndvzg\" (UniqueName: \"kubernetes.io/projected/777b0b1e-96c3-4914-8b7b-d51186433cb7-kube-api-access-ndvzg\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.681592 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-utilities\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.681664 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.681725 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-catalog-content\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.682098 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.682183 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-catalog-content\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.682556 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-utilities\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.712424 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndvzg\" (UniqueName: \"kubernetes.io/projected/777b0b1e-96c3-4914-8b7b-d51186433cb7-kube-api-access-ndvzg\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.721320 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.738552 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp7rd"] Feb 03 10:04:51 crc kubenswrapper[5010]: W0203 10:04:51.747121 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49f8db32_0c68_4c72_9aad_a02ce0c958aa.slice/crio-5fb8735def162698d86190ccce3a51a4ca66746325003df2b81d78c40f569048 WatchSource:0}: Error finding container 5fb8735def162698d86190ccce3a51a4ca66746325003df2b81d78c40f569048: Status 404 returned error can't find the container with id 5fb8735def162698d86190ccce3a51a4ca66746325003df2b81d78c40f569048 Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.773560 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.833614 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.917444 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vqqgt"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.919176 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.924645 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vqqgt"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.991735 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-utilities\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.992277 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmj7d\" (UniqueName: \"kubernetes.io/projected/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-kube-api-access-kmj7d\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.992357 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-catalog-content\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.090786 5010 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvtp4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.090841 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jvtp4" podUID="d8101cd0-5430-4786-bf8a-3d9c60ad1f7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.091048 5010 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvtp4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.091094 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvtp4" podUID="d8101cd0-5430-4786-bf8a-3d9c60ad1f7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.100428 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-utilities\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.100628 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmj7d\" (UniqueName: \"kubernetes.io/projected/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-kube-api-access-kmj7d\") pod \"redhat-operators-vqqgt\" 
(UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.100726 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-catalog-content\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.101711 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-catalog-content\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.102974 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-utilities\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.137263 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmj7d\" (UniqueName: \"kubernetes.io/projected/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-kube-api-access-kmj7d\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.202278 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 10:04:52 crc kubenswrapper[5010]: W0203 10:04:52.263368 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4cbcc5a5_e7ab_4f45_932e_2a75b44a8918.slice/crio-c707b98492191932d0175e25d4e25f2fb1048f7ce0a1e4416bc5a04063fd6c02 WatchSource:0}: Error finding container c707b98492191932d0175e25d4e25f2fb1048f7ce0a1e4416bc5a04063fd6c02: Status 404 returned error can't find the container with id c707b98492191932d0175e25d4e25f2fb1048f7ce0a1e4416bc5a04063fd6c02 Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.273890 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.325690 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.497588 5010 generic.go:334] "Generic (PLEG): container finished" podID="778b346c-f503-4364-9757-98c213d89edc" containerID="c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39" exitCode=0 Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.497673 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w967c" event={"ID":"778b346c-f503-4364-9757-98c213d89edc","Type":"ContainerDied","Data":"c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39"} Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.556554 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918","Type":"ContainerStarted","Data":"c707b98492191932d0175e25d4e25f2fb1048f7ce0a1e4416bc5a04063fd6c02"} Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.567882 5010 generic.go:334] "Generic (PLEG): container finished" podID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerID="e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee" exitCode=0 Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.567930 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp7rd" event={"ID":"49f8db32-0c68-4c72-9aad-a02ce0c958aa","Type":"ContainerDied","Data":"e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee"} Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.567955 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp7rd" event={"ID":"49f8db32-0c68-4c72-9aad-a02ce0c958aa","Type":"ContainerStarted","Data":"5fb8735def162698d86190ccce3a51a4ca66746325003df2b81d78c40f569048"} Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.590334 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.590393 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.608576 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.608611 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5pgxf"] Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.771475 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vqqgt"] Feb 03 10:04:52 crc kubenswrapper[5010]: W0203 10:04:52.797270 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcb492ad_594e_4460_8a8b_3476a4a0ddfe.slice/crio-b03e103076d38aa5bbbd68150acf3238a80f5aa11d029cd0429d26318865532f WatchSource:0}: Error finding container b03e103076d38aa5bbbd68150acf3238a80f5aa11d029cd0429d26318865532f: Status 404 returned error can't find the container with id b03e103076d38aa5bbbd68150acf3238a80f5aa11d029cd0429d26318865532f Feb 
03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.615651 5010 generic.go:334] "Generic (PLEG): container finished" podID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerID="fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1" exitCode=0 Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.615718 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pgxf" event={"ID":"777b0b1e-96c3-4914-8b7b-d51186433cb7","Type":"ContainerDied","Data":"fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1"} Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.616026 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pgxf" event={"ID":"777b0b1e-96c3-4914-8b7b-d51186433cb7","Type":"ContainerStarted","Data":"3ee4a0547eec3952db79e960939ddf437d022a2d426d7a0f64071f60145150ba"} Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.645025 5010 generic.go:334] "Generic (PLEG): container finished" podID="4cbcc5a5-e7ab-4f45-932e-2a75b44a8918" containerID="b6ce30260b0537e23c72d3fbda2480ff591908c7f4893374556eb30d66802455" exitCode=0 Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.645121 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918","Type":"ContainerDied","Data":"b6ce30260b0537e23c72d3fbda2480ff591908c7f4893374556eb30d66802455"} Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.649365 5010 generic.go:334] "Generic (PLEG): container finished" podID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerID="e368cf1e860ceec201b26f8820d913ac5d90d18137dd55d145c59832181c9831" exitCode=0 Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.649699 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vqqgt" event={"ID":"bcb492ad-594e-4460-8a8b-3476a4a0ddfe","Type":"ContainerDied","Data":"e368cf1e860ceec201b26f8820d913ac5d90d18137dd55d145c59832181c9831"} Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.649761 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vqqgt" event={"ID":"bcb492ad-594e-4460-8a8b-3476a4a0ddfe","Type":"ContainerStarted","Data":"b03e103076d38aa5bbbd68150acf3238a80f5aa11d029cd0429d26318865532f"} Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.665645 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.039415 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.118848 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.119866 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 10:04:55 crc kubenswrapper[5010]: E0203 10:04:55.120269 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cbcc5a5-e7ab-4f45-932e-2a75b44a8918" containerName="pruner" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.120287 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cbcc5a5-e7ab-4f45-932e-2a75b44a8918" containerName="pruner" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.120379 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cbcc5a5-e7ab-4f45-932e-2a75b44a8918" containerName="pruner" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.120931 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.123085 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.127056 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.129665 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.203039 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kube-api-access\") pod \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.203280 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kubelet-dir\") pod \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.203932 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4cbcc5a5-e7ab-4f45-932e-2a75b44a8918" (UID: "4cbcc5a5-e7ab-4f45-932e-2a75b44a8918"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.215882 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4cbcc5a5-e7ab-4f45-932e-2a75b44a8918" (UID: "4cbcc5a5-e7ab-4f45-932e-2a75b44a8918"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.312982 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.313168 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.313647 5010 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.313721 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.415453 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.415525 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.415620 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.454394 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.465224 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.715754 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918","Type":"ContainerDied","Data":"c707b98492191932d0175e25d4e25f2fb1048f7ce0a1e4416bc5a04063fd6c02"} Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.716080 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c707b98492191932d0175e25d4e25f2fb1048f7ce0a1e4416bc5a04063fd6c02" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.715827 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.870579 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 10:04:55 crc kubenswrapper[5010]: W0203 10:04:55.906631 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb5f00703_7e5f_4c7b_85fe_ce7fb07b7431.slice/crio-2204207b01823b33e27480642d42ad6ac24cd5512f2cf07c931779231850f28b WatchSource:0}: Error finding container 2204207b01823b33e27480642d42ad6ac24cd5512f2cf07c931779231850f28b: Status 404 returned error can't find the container with id 2204207b01823b33e27480642d42ad6ac24cd5512f2cf07c931779231850f28b Feb 03 10:04:56 crc kubenswrapper[5010]: I0203 10:04:56.730678 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431","Type":"ContainerStarted","Data":"2204207b01823b33e27480642d42ad6ac24cd5512f2cf07c931779231850f28b"} Feb 03 10:04:57 crc kubenswrapper[5010]: I0203 10:04:57.742301 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431","Type":"ContainerStarted","Data":"c9571ee18245dbd51cc88b9c5049e37b6b83a29ee3997cd7bbd419274e1211f3"} Feb 03 10:04:57 crc kubenswrapper[5010]: I0203 10:04:57.762904 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.7628874530000003 podStartE2EDuration="2.762887453s" podCreationTimestamp="2026-02-03 10:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:57.761409591 +0000 UTC m=+167.917385730" watchObservedRunningTime="2026-02-03 10:04:57.762887453 +0000 UTC m=+167.918863582" Feb 03 10:04:57 crc kubenswrapper[5010]: I0203 10:04:57.770359 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:57 crc kubenswrapper[5010]: I0203 10:04:57.776351 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:57 crc kubenswrapper[5010]: 
I0203 10:04:57.793571 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-m4jjq" Feb 03 10:04:58 crc kubenswrapper[5010]: I0203 10:04:58.017880 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:58 crc kubenswrapper[5010]: I0203 10:04:58.718847 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-clvdz"] Feb 03 10:04:58 crc kubenswrapper[5010]: I0203 10:04:58.753362 5010 generic.go:334] "Generic (PLEG): container finished" podID="b5f00703-7e5f-4c7b-85fe-ce7fb07b7431" containerID="c9571ee18245dbd51cc88b9c5049e37b6b83a29ee3997cd7bbd419274e1211f3" exitCode=0 Feb 03 10:04:58 crc kubenswrapper[5010]: I0203 10:04:58.753401 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431","Type":"ContainerDied","Data":"c9571ee18245dbd51cc88b9c5049e37b6b83a29ee3997cd7bbd419274e1211f3"} Feb 03 10:04:58 crc kubenswrapper[5010]: W0203 10:04:58.754895 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod081d0234_b506_49ff_81c9_c535f6e1c588.slice/crio-13f48e6ab387ab1d95442d03eb875ad51364e131f58502ed226acd326e53d72e WatchSource:0}: Error finding container 13f48e6ab387ab1d95442d03eb875ad51364e131f58502ed226acd326e53d72e: Status 404 returned error can't find the container with id 13f48e6ab387ab1d95442d03eb875ad51364e131f58502ed226acd326e53d72e Feb 03 10:04:59 crc kubenswrapper[5010]: I0203 10:04:59.764133 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-clvdz" event={"ID":"081d0234-b506-49ff-81c9-c535f6e1c588","Type":"ContainerStarted","Data":"ac9fdd2d1d1b165c1349b346bcc0d7a19010fb2fc0248e686441121ff3fe11b3"} Feb 03 10:04:59 crc kubenswrapper[5010]: I0203 10:04:59.764514 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-clvdz" event={"ID":"081d0234-b506-49ff-81c9-c535f6e1c588","Type":"ContainerStarted","Data":"13f48e6ab387ab1d95442d03eb875ad51364e131f58502ed226acd326e53d72e"} Feb 03 10:05:01 crc kubenswrapper[5010]: I0203 10:05:01.591010 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:05:01 crc kubenswrapper[5010]: I0203 10:05:01.596701 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:05:02 crc kubenswrapper[5010]: I0203 10:05:02.091193 5010 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvtp4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 03 10:05:02 crc kubenswrapper[5010]: I0203 10:05:02.091279 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvtp4" podUID="d8101cd0-5430-4786-bf8a-3d9c60ad1f7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 03 10:05:02 crc kubenswrapper[5010]: I0203 10:05:02.091621 5010 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvtp4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get 
\"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 03 10:05:02 crc kubenswrapper[5010]: I0203 10:05:02.091708 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jvtp4" podUID="d8101cd0-5430-4786-bf8a-3d9c60ad1f7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 03 10:05:07 crc kubenswrapper[5010]: I0203 10:05:07.538264 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:05:09 crc kubenswrapper[5010]: I0203 10:05:09.276541 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.070097 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.192637 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kubelet-dir\") pod \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.192752 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b5f00703-7e5f-4c7b-85fe-ce7fb07b7431" (UID: "b5f00703-7e5f-4c7b-85fe-ce7fb07b7431"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.192797 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kube-api-access\") pod \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.193025 5010 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.199319 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b5f00703-7e5f-4c7b-85fe-ce7fb07b7431" (UID: "b5f00703-7e5f-4c7b-85fe-ce7fb07b7431"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.294523 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.830624 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431","Type":"ContainerDied","Data":"2204207b01823b33e27480642d42ad6ac24cd5512f2cf07c931779231850f28b"} Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.830671 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2204207b01823b33e27480642d42ad6ac24cd5512f2cf07c931779231850f28b" Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.830741 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:05:12 crc kubenswrapper[5010]: I0203 10:05:12.099181 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jvtp4" Feb 03 10:05:16 crc kubenswrapper[5010]: I0203 10:05:16.390761 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:05:16 crc kubenswrapper[5010]: I0203 10:05:16.391132 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:05:17 crc kubenswrapper[5010]: E0203 10:05:17.118772 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 03 10:05:17 crc kubenswrapper[5010]: E0203 10:05:17.119518 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kmj7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vqqgt_openshift-marketplace(bcb492ad-594e-4460-8a8b-3476a4a0ddfe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:17 crc kubenswrapper[5010]: E0203 10:05:17.121189 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vqqgt" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" Feb 03 10:05:17 crc kubenswrapper[5010]: E0203 10:05:17.137945 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 03 10:05:17 crc kubenswrapper[5010]: E0203 10:05:17.138118 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndvzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5pgxf_openshift-marketplace(777b0b1e-96c3-4914-8b7b-d51186433cb7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:17 crc kubenswrapper[5010]: E0203 10:05:17.139935 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5pgxf" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" Feb 03 10:05:18 crc kubenswrapper[5010]: E0203 10:05:18.407551 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vqqgt" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" Feb 03 10:05:18 crc kubenswrapper[5010]: E0203 10:05:18.408952 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5pgxf" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" Feb 03 10:05:18 crc kubenswrapper[5010]: E0203 10:05:18.500267 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 03 10:05:18 crc kubenswrapper[5010]: E0203 10:05:18.500476 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jmkxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dgktg_openshift-marketplace(16b28bac-b8da-4fa7-8282-3b97ef4decac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:18 crc kubenswrapper[5010]: E0203 10:05:18.501857 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dgktg" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" Feb 03 10:05:19 crc kubenswrapper[5010]: E0203 10:05:19.946464 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dgktg" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" Feb 03 10:05:20 crc kubenswrapper[5010]: E0203 10:05:20.016638 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 03 10:05:20 crc kubenswrapper[5010]: E0203 10:05:20.016815 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2wnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9nhlj_openshift-marketplace(e7d7a138-50ca-4706-b760-2fe5154b2796): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:20 crc kubenswrapper[5010]: E0203 10:05:20.018008 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9nhlj" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" Feb 03 10:05:20 crc kubenswrapper[5010]: E0203 10:05:20.029661 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 03 10:05:20 crc kubenswrapper[5010]: E0203 10:05:20.030459 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rkwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rhsmk_openshift-marketplace(6b321403-09c3-4199-98ce-474deeea9d18): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:20 crc kubenswrapper[5010]: E0203 10:05:20.031583 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rhsmk" podUID="6b321403-09c3-4199-98ce-474deeea9d18" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.160126 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9nhlj" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.160176 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rhsmk" podUID="6b321403-09c3-4199-98ce-474deeea9d18" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.239550 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.239721 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgmtk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rp7rd_openshift-marketplace(49f8db32-0c68-4c72-9aad-a02ce0c958aa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.241097 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rp7rd" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.281907 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.282026 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjvqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-f8ldc_openshift-marketplace(5a09b802-00fe-4ff8-983e-58c495061478): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.283332 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-f8ldc" podUID="5a09b802-00fe-4ff8-983e-58c495061478" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.295380 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.295473 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mw58w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-w967c_openshift-marketplace(778b346c-f503-4364-9757-98c213d89edc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.296683 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-w967c" podUID="778b346c-f503-4364-9757-98c213d89edc" Feb 03 10:05:21 crc kubenswrapper[5010]: I0203 10:05:21.893491 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-clvdz" event={"ID":"081d0234-b506-49ff-81c9-c535f6e1c588","Type":"ContainerStarted","Data":"c28e6bed742dfead03b98be3eca12cec53662c93c11807c896c211e74fa98d69"} Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.896331 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rp7rd" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.896346 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-w967c" podUID="778b346c-f503-4364-9757-98c213d89edc" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.896557 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-f8ldc" podUID="5a09b802-00fe-4ff8-983e-58c495061478" Feb 03 10:05:21 crc kubenswrapper[5010]: I0203 10:05:21.940081 5010 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-clvdz" podStartSLOduration=166.940062777 podStartE2EDuration="2m46.940062777s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:05:21.936702501 +0000 UTC m=+192.092678630" watchObservedRunningTime="2026-02-03 10:05:21.940062777 +0000 UTC m=+192.096038926" Feb 03 10:05:22 crc kubenswrapper[5010]: I0203 10:05:22.489369 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" Feb 03 10:05:30 crc kubenswrapper[5010]: I0203 10:05:30.933010 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pgxf" event={"ID":"777b0b1e-96c3-4914-8b7b-d51186433cb7","Type":"ContainerStarted","Data":"8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e"} Feb 03 10:05:31 crc kubenswrapper[5010]: I0203 10:05:31.938548 5010 generic.go:334] "Generic (PLEG): container finished" podID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerID="8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e" exitCode=0 Feb 03 10:05:31 crc kubenswrapper[5010]: I0203 10:05:31.938622 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pgxf" event={"ID":"777b0b1e-96c3-4914-8b7b-d51186433cb7","Type":"ContainerDied","Data":"8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e"} Feb 03 10:05:31 crc kubenswrapper[5010]: I0203 10:05:31.940990 5010 generic.go:334] "Generic (PLEG): container finished" podID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerID="23d25d23b886bcc187c1b9cd3f31af42a2e9d0581c448b9f8d3e75f9a6276513" exitCode=0 Feb 03 10:05:31 crc kubenswrapper[5010]: I0203 10:05:31.941020 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vqqgt" event={"ID":"bcb492ad-594e-4460-8a8b-3476a4a0ddfe","Type":"ContainerDied","Data":"23d25d23b886bcc187c1b9cd3f31af42a2e9d0581c448b9f8d3e75f9a6276513"} Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.700490 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 03 10:05:32 crc kubenswrapper[5010]: E0203 10:05:32.701048 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f00703-7e5f-4c7b-85fe-ce7fb07b7431" containerName="pruner" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.701070 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f00703-7e5f-4c7b-85fe-ce7fb07b7431" containerName="pruner" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.701190 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f00703-7e5f-4c7b-85fe-ce7fb07b7431" containerName="pruner" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.701645 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.703254 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.703652 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.709707 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.791998 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81299ba1-c345-43b2-ac1b-78107f12ed8c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.792086 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81299ba1-c345-43b2-ac1b-78107f12ed8c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.893318 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81299ba1-c345-43b2-ac1b-78107f12ed8c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.893422 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81299ba1-c345-43b2-ac1b-78107f12ed8c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.893448 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81299ba1-c345-43b2-ac1b-78107f12ed8c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.912444 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81299ba1-c345-43b2-ac1b-78107f12ed8c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.947519 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vqqgt" event={"ID":"bcb492ad-594e-4460-8a8b-3476a4a0ddfe","Type":"ContainerStarted","Data":"7d30f3b060cc0d586383cb9de6a300c34ce671caf4098a60fda10d9a98201907"} Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.949327 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pgxf" 
event={"ID":"777b0b1e-96c3-4914-8b7b-d51186433cb7","Type":"ContainerStarted","Data":"64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b"} Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.966179 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vqqgt" podStartSLOduration=3.20027069 podStartE2EDuration="41.966161171s" podCreationTimestamp="2026-02-03 10:04:51 +0000 UTC" firstStartedPulling="2026-02-03 10:04:53.661466586 +0000 UTC m=+163.817442715" lastFinishedPulling="2026-02-03 10:05:32.427357067 +0000 UTC m=+202.583333196" observedRunningTime="2026-02-03 10:05:32.962572432 +0000 UTC m=+203.118548571" watchObservedRunningTime="2026-02-03 10:05:32.966161171 +0000 UTC m=+203.122137300" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.984539 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5pgxf" podStartSLOduration=3.185124851 podStartE2EDuration="41.984517984s" podCreationTimestamp="2026-02-03 10:04:51 +0000 UTC" firstStartedPulling="2026-02-03 10:04:53.619842473 +0000 UTC m=+163.775818592" lastFinishedPulling="2026-02-03 10:05:32.419235596 +0000 UTC m=+202.575211725" observedRunningTime="2026-02-03 10:05:32.979760336 +0000 UTC m=+203.135736485" watchObservedRunningTime="2026-02-03 10:05:32.984517984 +0000 UTC m=+203.140494123" Feb 03 10:05:33 crc kubenswrapper[5010]: I0203 10:05:33.025784 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:33 crc kubenswrapper[5010]: I0203 10:05:33.204046 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 03 10:05:33 crc kubenswrapper[5010]: I0203 10:05:33.957501 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"81299ba1-c345-43b2-ac1b-78107f12ed8c","Type":"ContainerStarted","Data":"52d09727f2737181bd5292c49f0a0cb1d6b02cc9ba3925b005189292d769e5fd"} Feb 03 10:05:33 crc kubenswrapper[5010]: I0203 10:05:33.957838 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"81299ba1-c345-43b2-ac1b-78107f12ed8c","Type":"ContainerStarted","Data":"82dec6b7308cef3963c8d4acc2aea78d67df12d2c0c84d234d23c8d27a34b151"} Feb 03 10:05:34 crc kubenswrapper[5010]: I0203 10:05:34.521366 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.5213477319999997 podStartE2EDuration="2.521347732s" podCreationTimestamp="2026-02-03 10:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:05:33.973954557 +0000 UTC m=+204.129930686" watchObservedRunningTime="2026-02-03 10:05:34.521347732 +0000 UTC m=+204.677323861" Feb 03 10:05:34 crc kubenswrapper[5010]: I0203 10:05:34.965029 5010 generic.go:334] "Generic (PLEG): container finished" podID="81299ba1-c345-43b2-ac1b-78107f12ed8c" containerID="52d09727f2737181bd5292c49f0a0cb1d6b02cc9ba3925b005189292d769e5fd" exitCode=0 Feb 03 10:05:34 crc kubenswrapper[5010]: I0203 10:05:34.965080 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"81299ba1-c345-43b2-ac1b-78107f12ed8c","Type":"ContainerDied","Data":"52d09727f2737181bd5292c49f0a0cb1d6b02cc9ba3925b005189292d769e5fd"} Feb 03 10:05:35 crc kubenswrapper[5010]: I0203 10:05:35.973803 5010 generic.go:334] "Generic (PLEG): container finished" podID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerID="bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4" exitCode=0 Feb 03 10:05:35 crc kubenswrapper[5010]: I0203 10:05:35.973881 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgktg" event={"ID":"16b28bac-b8da-4fa7-8282-3b97ef4decac","Type":"ContainerDied","Data":"bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4"} Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.225192 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.249080 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81299ba1-c345-43b2-ac1b-78107f12ed8c-kubelet-dir\") pod \"81299ba1-c345-43b2-ac1b-78107f12ed8c\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.249361 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81299ba1-c345-43b2-ac1b-78107f12ed8c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "81299ba1-c345-43b2-ac1b-78107f12ed8c" (UID: "81299ba1-c345-43b2-ac1b-78107f12ed8c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.249378 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81299ba1-c345-43b2-ac1b-78107f12ed8c-kube-api-access\") pod \"81299ba1-c345-43b2-ac1b-78107f12ed8c\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.249636 5010 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81299ba1-c345-43b2-ac1b-78107f12ed8c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.253818 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81299ba1-c345-43b2-ac1b-78107f12ed8c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "81299ba1-c345-43b2-ac1b-78107f12ed8c" (UID: "81299ba1-c345-43b2-ac1b-78107f12ed8c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.382388 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81299ba1-c345-43b2-ac1b-78107f12ed8c-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.981294 5010 generic.go:334] "Generic (PLEG): container finished" podID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerID="730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a" exitCode=0 Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.981332 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nhlj" event={"ID":"e7d7a138-50ca-4706-b760-2fe5154b2796","Type":"ContainerDied","Data":"730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a"} Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.984286 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgktg" event={"ID":"16b28bac-b8da-4fa7-8282-3b97ef4decac","Type":"ContainerStarted","Data":"fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff"} Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.985856 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"81299ba1-c345-43b2-ac1b-78107f12ed8c","Type":"ContainerDied","Data":"82dec6b7308cef3963c8d4acc2aea78d67df12d2c0c84d234d23c8d27a34b151"} Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.985881 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82dec6b7308cef3963c8d4acc2aea78d67df12d2c0c84d234d23c8d27a34b151" Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.985908 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.999605 5010 generic.go:334] "Generic (PLEG): container finished" podID="6b321403-09c3-4199-98ce-474deeea9d18" containerID="ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e" exitCode=0 Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.999754 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhsmk" event={"ID":"6b321403-09c3-4199-98ce-474deeea9d18","Type":"ContainerDied","Data":"ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e"} Feb 03 10:05:37 crc kubenswrapper[5010]: I0203 10:05:37.005060 5010 generic.go:334] "Generic (PLEG): container finished" podID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerID="fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7" exitCode=0 Feb 03 10:05:37 crc kubenswrapper[5010]: I0203 10:05:37.005153 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp7rd" event={"ID":"49f8db32-0c68-4c72-9aad-a02ce0c958aa","Type":"ContainerDied","Data":"fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7"} Feb 03 10:05:37 crc kubenswrapper[5010]: I0203 10:05:37.028602 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dgktg" podStartSLOduration=2.917250564 podStartE2EDuration="49.028583744s" podCreationTimestamp="2026-02-03 10:04:48 +0000 UTC" firstStartedPulling="2026-02-03 10:04:50.453560343 +0000 UTC m=+160.609536462" lastFinishedPulling="2026-02-03 10:05:36.564893513 +0000 UTC m=+206.720869642" observedRunningTime="2026-02-03 10:05:37.027134828 +0000 UTC m=+207.183110957" watchObservedRunningTime="2026-02-03 10:05:37.028583744 +0000 UTC m=+207.184559883" Feb 03 10:05:39 crc kubenswrapper[5010]: I0203 10:05:39.015947 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp7rd" event={"ID":"49f8db32-0c68-4c72-9aad-a02ce0c958aa","Type":"ContainerStarted","Data":"435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26"} Feb 03 10:05:39 crc kubenswrapper[5010]: I0203 10:05:39.034257 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rp7rd" podStartSLOduration=3.177812653 podStartE2EDuration="49.034240989s" podCreationTimestamp="2026-02-03 10:04:50 +0000 UTC" firstStartedPulling="2026-02-03 10:04:52.573074852 +0000 UTC m=+162.729050981" lastFinishedPulling="2026-02-03 10:05:38.429503188 +0000 UTC m=+208.585479317" observedRunningTime="2026-02-03 10:05:39.032342902 +0000 UTC m=+209.188319041" watchObservedRunningTime="2026-02-03 10:05:39.034240989 +0000 UTC m=+209.190217118" Feb 03 10:05:39 crc kubenswrapper[5010]: I0203 10:05:39.230241 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:05:39 crc kubenswrapper[5010]: I0203 10:05:39.230635 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:05:39 crc kubenswrapper[5010]: I0203 10:05:39.982369 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.101036 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 
03 10:05:40 crc kubenswrapper[5010]: E0203 10:05:40.101241 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81299ba1-c345-43b2-ac1b-78107f12ed8c" containerName="pruner" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.101252 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="81299ba1-c345-43b2-ac1b-78107f12ed8c" containerName="pruner" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.101361 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="81299ba1-c345-43b2-ac1b-78107f12ed8c" containerName="pruner" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.101691 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.103876 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.103931 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.116744 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.232359 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kube-api-access\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.232419 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-var-lock\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.232461 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.333419 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.333487 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kube-api-access\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.333529 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-var-lock\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.333573 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.333649 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-var-lock\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.355348 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kube-api-access\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.417923 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.659560 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.037317 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7c4b0e53-f63d-4ccf-a718-389b959a66c4","Type":"ContainerStarted","Data":"47e2fb47d49372688a6df246f47c04ec60321886600acbad24a608754f55694c"} Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.040425 5010 generic.go:334] "Generic (PLEG): container finished" podID="778b346c-f503-4364-9757-98c213d89edc" containerID="699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd" exitCode=0 Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.040477 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w967c" event={"ID":"778b346c-f503-4364-9757-98c213d89edc","Type":"ContainerDied","Data":"699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd"} Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.045626 5010 generic.go:334] "Generic (PLEG): container finished" podID="5a09b802-00fe-4ff8-983e-58c495061478" containerID="f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50" exitCode=0 Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.045692 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ldc" event={"ID":"5a09b802-00fe-4ff8-983e-58c495061478","Type":"ContainerDied","Data":"f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50"} Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.048647 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhsmk" event={"ID":"6b321403-09c3-4199-98ce-474deeea9d18","Type":"ContainerStarted","Data":"3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f"} Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.050928 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nhlj" 
event={"ID":"e7d7a138-50ca-4706-b760-2fe5154b2796","Type":"ContainerStarted","Data":"179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab"} Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.100696 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9nhlj" podStartSLOduration=3.7183660659999997 podStartE2EDuration="53.100682045s" podCreationTimestamp="2026-02-03 10:04:48 +0000 UTC" firstStartedPulling="2026-02-03 10:04:50.449967471 +0000 UTC m=+160.605943590" lastFinishedPulling="2026-02-03 10:05:39.83228344 +0000 UTC m=+209.988259569" observedRunningTime="2026-02-03 10:05:41.098306877 +0000 UTC m=+211.254283016" watchObservedRunningTime="2026-02-03 10:05:41.100682045 +0000 UTC m=+211.256658174" Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.116629 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rhsmk" podStartSLOduration=2.590496173 podStartE2EDuration="53.116608828s" podCreationTimestamp="2026-02-03 10:04:48 +0000 UTC" firstStartedPulling="2026-02-03 10:04:49.426707229 +0000 UTC m=+159.582683358" lastFinishedPulling="2026-02-03 10:05:39.952819884 +0000 UTC m=+210.108796013" observedRunningTime="2026-02-03 10:05:41.114171598 +0000 UTC m=+211.270147727" watchObservedRunningTime="2026-02-03 10:05:41.116608828 +0000 UTC m=+211.272584957" Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.254315 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.254365 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.291402 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.835139 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.835425 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.890880 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.057677 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w967c" event={"ID":"778b346c-f503-4364-9757-98c213d89edc","Type":"ContainerStarted","Data":"d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08"} Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.059183 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7c4b0e53-f63d-4ccf-a718-389b959a66c4","Type":"ContainerStarted","Data":"8235871772bfab300d8b3a5a6ad3309af90a9d4729dea3e53a02ffdbbd8677af"} Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.075111 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w967c" podStartSLOduration=3.140853491 podStartE2EDuration="52.075094106s" podCreationTimestamp="2026-02-03 10:04:50 +0000 UTC" firstStartedPulling="2026-02-03 10:04:52.512044927 
+0000 UTC m=+162.668021046" lastFinishedPulling="2026-02-03 10:05:41.446285532 +0000 UTC m=+211.602261661" observedRunningTime="2026-02-03 10:05:42.073737343 +0000 UTC m=+212.229713472" watchObservedRunningTime="2026-02-03 10:05:42.075094106 +0000 UTC m=+212.231070235" Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.110448 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.132409 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.13239129 podStartE2EDuration="2.13239129s" podCreationTimestamp="2026-02-03 10:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:05:42.097198162 +0000 UTC m=+212.253174291" watchObservedRunningTime="2026-02-03 10:05:42.13239129 +0000 UTC m=+212.288367419" Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.275126 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.275188 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.320313 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:05:43 crc kubenswrapper[5010]: I0203 10:05:43.118069 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:05:44 crc kubenswrapper[5010]: I0203 10:05:44.069944 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ldc" event={"ID":"5a09b802-00fe-4ff8-983e-58c495061478","Type":"ContainerStarted","Data":"6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac"} Feb 03 10:05:45 crc kubenswrapper[5010]: I0203 10:05:45.928245 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f8ldc" podStartSLOduration=3.746774883 podStartE2EDuration="57.928225135s" podCreationTimestamp="2026-02-03 10:04:48 +0000 UTC" firstStartedPulling="2026-02-03 10:04:49.419089892 +0000 UTC m=+159.575066021" lastFinishedPulling="2026-02-03 10:05:43.600540124 +0000 UTC m=+213.756516273" observedRunningTime="2026-02-03 10:05:44.089657103 +0000 UTC m=+214.245633262" watchObservedRunningTime="2026-02-03 10:05:45.928225135 +0000 UTC m=+216.084201274" Feb 03 10:05:45 crc kubenswrapper[5010]: I0203 10:05:45.929618 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vqqgt"] Feb 03 10:05:45 crc kubenswrapper[5010]: I0203 10:05:45.929849 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vqqgt" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="registry-server" containerID="cri-o://7d30f3b060cc0d586383cb9de6a300c34ce671caf4098a60fda10d9a98201907" gracePeriod=2 Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.085036 5010 generic.go:334] "Generic (PLEG): container finished" podID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerID="7d30f3b060cc0d586383cb9de6a300c34ce671caf4098a60fda10d9a98201907" exitCode=0 Feb 03 10:05:46 crc 
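The pod_startup_latency_tracker entries carry enough data to check the arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that end-to-end figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling) taken on the monotonic m=+ readings. A small Go check that reproduces the community-operators-9nhlj numbers, assuming exactly that formula:

    package main

    import "fmt"

    func main() {
        // Monotonic (m=+) readings and the E2E duration from the
        // community-operators-9nhlj log entry above.
        const (
            firstStartedPulling = 160.605943590 // m=+ when the first image pull began
            lastFinishedPulling = 209.988259569 // m=+ when the last image pull finished
            e2eDuration         = 53.100682045  // watchObservedRunningTime - podCreationTimestamp
        )
        pull := lastFinishedPulling - firstStartedPulling // 49.382315979s spent pulling images
        slo := e2eDuration - pull                         // startup time with pulls excluded
        fmt.Printf("podStartSLOduration = %.9fs\n", slo)  // prints 3.718366066s, matching the log
    }

The same relation holds for the other marketplace pods above; installer-9-crc pulled no images (zero-value pull timestamps), so its SLO and E2E durations coincide.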
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.085077 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vqqgt" event={"ID":"bcb492ad-594e-4460-8a8b-3476a4a0ddfe","Type":"ContainerDied","Data":"7d30f3b060cc0d586383cb9de6a300c34ce671caf4098a60fda10d9a98201907"}
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.266256 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vqqgt"
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.310852 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmj7d\" (UniqueName: \"kubernetes.io/projected/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-kube-api-access-kmj7d\") pod \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") "
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.310999 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-utilities\") pod \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") "
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.311053 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-catalog-content\") pod \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") "
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.312445 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-utilities" (OuterVolumeSpecName: "utilities") pod "bcb492ad-594e-4460-8a8b-3476a4a0ddfe" (UID: "bcb492ad-594e-4460-8a8b-3476a4a0ddfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.317704 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-kube-api-access-kmj7d" (OuterVolumeSpecName: "kube-api-access-kmj7d") pod "bcb492ad-594e-4460-8a8b-3476a4a0ddfe" (UID: "bcb492ad-594e-4460-8a8b-3476a4a0ddfe"). InnerVolumeSpecName "kube-api-access-kmj7d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.390410 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.390472 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.390521 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz"
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.391054 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.391188 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb" gracePeriod=600
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.413154 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.413187 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmj7d\" (UniqueName: \"kubernetes.io/projected/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-kube-api-access-kmj7d\") on node \"crc\" DevicePath \"\""
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.434737 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcb492ad-594e-4460-8a8b-3476a4a0ddfe" (UID: "bcb492ad-594e-4460-8a8b-3476a4a0ddfe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.514569 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.096482 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb" exitCode=0
Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.096567 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb"}
Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.100439 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vqqgt" event={"ID":"bcb492ad-594e-4460-8a8b-3476a4a0ddfe","Type":"ContainerDied","Data":"b03e103076d38aa5bbbd68150acf3238a80f5aa11d029cd0429d26318865532f"}
Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.100514 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vqqgt"
Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.100520 5010 scope.go:117] "RemoveContainer" containerID="7d30f3b060cc0d586383cb9de6a300c34ce671caf4098a60fda10d9a98201907"
Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.133466 5010 scope.go:117] "RemoveContainer" containerID="23d25d23b886bcc187c1b9cd3f31af42a2e9d0581c448b9f8d3e75f9a6276513"
Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.140227 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vqqgt"]
Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.143437 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vqqgt"]
Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.149433 5010 scope.go:117] "RemoveContainer" containerID="e368cf1e860ceec201b26f8820d913ac5d90d18137dd55d145c59832181c9831"
Feb 03 10:05:48 crc kubenswrapper[5010]: I0203 10:05:48.109458 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"f50e55cc732f578ead4018fcd8ab51937afcd54061bf1c5885e82d08d42bd4d4"}
Feb 03 10:05:48 crc kubenswrapper[5010]: I0203 10:05:48.512420 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" path="/var/lib/kubelet/pods/bcb492ad-594e-4460-8a8b-3476a4a0ddfe/volumes"
Feb 03 10:05:48 crc kubenswrapper[5010]: I0203 10:05:48.630119 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f8ldc"
Feb 03 10:05:48 crc kubenswrapper[5010]: I0203 10:05:48.630238 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f8ldc"
Feb 03 10:05:48 crc kubenswrapper[5010]: I0203 10:05:48.677295 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f8ldc"
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:05:48 crc kubenswrapper[5010]: I0203 10:05:48.835859 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:05:48 crc kubenswrapper[5010]: I0203 10:05:48.932392 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:05:49 crc kubenswrapper[5010]: I0203 10:05:49.028137 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:05:49 crc kubenswrapper[5010]: I0203 10:05:49.028190 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:05:49 crc kubenswrapper[5010]: I0203 10:05:49.075787 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:05:49 crc kubenswrapper[5010]: I0203 10:05:49.154699 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:05:49 crc kubenswrapper[5010]: I0203 10:05:49.157461 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:05:49 crc kubenswrapper[5010]: I0203 10:05:49.161698 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:05:49 crc kubenswrapper[5010]: I0203 10:05:49.274469 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:05:50 crc kubenswrapper[5010]: I0203 10:05:50.735956 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9nhlj"] Feb 03 10:05:50 crc kubenswrapper[5010]: I0203 10:05:50.849424 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:05:50 crc kubenswrapper[5010]: I0203 10:05:50.850157 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:05:50 crc kubenswrapper[5010]: I0203 10:05:50.916933 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.133377 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9nhlj" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="registry-server" containerID="cri-o://179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab" gracePeriod=2 Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.190558 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.297662 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.328853 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dgktg"] Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.329062 5010 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dgktg" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerName="registry-server" containerID="cri-o://fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff" gracePeriod=2 Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.494288 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.583650 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-utilities\") pod \"e7d7a138-50ca-4706-b760-2fe5154b2796\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.583722 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-catalog-content\") pod \"e7d7a138-50ca-4706-b760-2fe5154b2796\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.583787 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2wnb\" (UniqueName: \"kubernetes.io/projected/e7d7a138-50ca-4706-b760-2fe5154b2796-kube-api-access-d2wnb\") pod \"e7d7a138-50ca-4706-b760-2fe5154b2796\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.585289 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-utilities" (OuterVolumeSpecName: "utilities") pod "e7d7a138-50ca-4706-b760-2fe5154b2796" (UID: "e7d7a138-50ca-4706-b760-2fe5154b2796"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.592397 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7d7a138-50ca-4706-b760-2fe5154b2796-kube-api-access-d2wnb" (OuterVolumeSpecName: "kube-api-access-d2wnb") pod "e7d7a138-50ca-4706-b760-2fe5154b2796" (UID: "e7d7a138-50ca-4706-b760-2fe5154b2796"). InnerVolumeSpecName "kube-api-access-d2wnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.635801 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.635986 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7d7a138-50ca-4706-b760-2fe5154b2796" (UID: "e7d7a138-50ca-4706-b760-2fe5154b2796"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.685259 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmkxt\" (UniqueName: \"kubernetes.io/projected/16b28bac-b8da-4fa7-8282-3b97ef4decac-kube-api-access-jmkxt\") pod \"16b28bac-b8da-4fa7-8282-3b97ef4decac\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.685531 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-utilities\") pod \"16b28bac-b8da-4fa7-8282-3b97ef4decac\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.685570 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-catalog-content\") pod \"16b28bac-b8da-4fa7-8282-3b97ef4decac\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.685805 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.685817 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.685828 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2wnb\" (UniqueName: \"kubernetes.io/projected/e7d7a138-50ca-4706-b760-2fe5154b2796-kube-api-access-d2wnb\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.686396 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-utilities" (OuterVolumeSpecName: "utilities") pod "16b28bac-b8da-4fa7-8282-3b97ef4decac" (UID: "16b28bac-b8da-4fa7-8282-3b97ef4decac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.689631 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16b28bac-b8da-4fa7-8282-3b97ef4decac-kube-api-access-jmkxt" (OuterVolumeSpecName: "kube-api-access-jmkxt") pod "16b28bac-b8da-4fa7-8282-3b97ef4decac" (UID: "16b28bac-b8da-4fa7-8282-3b97ef4decac"). InnerVolumeSpecName "kube-api-access-jmkxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.730426 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16b28bac-b8da-4fa7-8282-3b97ef4decac" (UID: "16b28bac-b8da-4fa7-8282-3b97ef4decac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.786934 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmkxt\" (UniqueName: \"kubernetes.io/projected/16b28bac-b8da-4fa7-8282-3b97ef4decac-kube-api-access-jmkxt\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.786987 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.787008 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.139411 5010 generic.go:334] "Generic (PLEG): container finished" podID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerID="fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff" exitCode=0 Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.139487 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgktg" event={"ID":"16b28bac-b8da-4fa7-8282-3b97ef4decac","Type":"ContainerDied","Data":"fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff"} Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.139495 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.139514 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgktg" event={"ID":"16b28bac-b8da-4fa7-8282-3b97ef4decac","Type":"ContainerDied","Data":"f8067043c468ce02991a947f5558cbe6d87a64ec40b08e86c4e947e44eed14bc"} Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.139529 5010 scope.go:117] "RemoveContainer" containerID="fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.145549 5010 generic.go:334] "Generic (PLEG): container finished" podID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerID="179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab" exitCode=0 Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.145590 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nhlj" event={"ID":"e7d7a138-50ca-4706-b760-2fe5154b2796","Type":"ContainerDied","Data":"179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab"} Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.145629 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nhlj" event={"ID":"e7d7a138-50ca-4706-b760-2fe5154b2796","Type":"ContainerDied","Data":"1b0c23388be323142da658c9f60348ab9cd0cc51111e7de9f4e1bb46c8a6bc8a"} Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.145568 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.157716 5010 scope.go:117] "RemoveContainer" containerID="bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.180147 5010 scope.go:117] "RemoveContainer" containerID="3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.181631 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dgktg"] Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.188479 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dgktg"] Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.197311 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9nhlj"] Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.203336 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9nhlj"] Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.213097 5010 scope.go:117] "RemoveContainer" containerID="fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff" Feb 03 10:05:52 crc kubenswrapper[5010]: E0203 10:05:52.214747 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff\": container with ID starting with fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff not found: ID does not exist" containerID="fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.214802 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff"} err="failed to get container status \"fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff\": rpc error: code = NotFound desc = could not find container \"fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff\": container with ID starting with fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff not found: ID does not exist" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.214835 5010 scope.go:117] "RemoveContainer" containerID="bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4" Feb 03 10:05:52 crc kubenswrapper[5010]: E0203 10:05:52.215770 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4\": container with ID starting with bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4 not found: ID does not exist" containerID="bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.215826 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4"} err="failed to get container status \"bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4\": rpc error: code = NotFound desc = could not find container \"bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4\": container with ID starting with 
bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4 not found: ID does not exist" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.215855 5010 scope.go:117] "RemoveContainer" containerID="3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f" Feb 03 10:05:52 crc kubenswrapper[5010]: E0203 10:05:52.216300 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f\": container with ID starting with 3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f not found: ID does not exist" containerID="3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.216344 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f"} err="failed to get container status \"3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f\": rpc error: code = NotFound desc = could not find container \"3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f\": container with ID starting with 3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f not found: ID does not exist" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.216373 5010 scope.go:117] "RemoveContainer" containerID="179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.227286 5010 scope.go:117] "RemoveContainer" containerID="730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.240453 5010 scope.go:117] "RemoveContainer" containerID="6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.270053 5010 scope.go:117] "RemoveContainer" containerID="179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab" Feb 03 10:05:52 crc kubenswrapper[5010]: E0203 10:05:52.270438 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab\": container with ID starting with 179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab not found: ID does not exist" containerID="179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.270473 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab"} err="failed to get container status \"179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab\": rpc error: code = NotFound desc = could not find container \"179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab\": container with ID starting with 179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab not found: ID does not exist" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.270495 5010 scope.go:117] "RemoveContainer" containerID="730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a" Feb 03 10:05:52 crc kubenswrapper[5010]: E0203 10:05:52.270723 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16b28bac_b8da_4fa7_8282_3b97ef4decac.slice/crio-f8067043c468ce02991a947f5558cbe6d87a64ec40b08e86c4e947e44eed14bc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7d7a138_50ca_4706_b760_2fe5154b2796.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16b28bac_b8da_4fa7_8282_3b97ef4decac.slice\": RecentStats: unable to find data in memory cache]" Feb 03 10:05:52 crc kubenswrapper[5010]: E0203 10:05:52.270755 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a\": container with ID starting with 730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a not found: ID does not exist" containerID="730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.270778 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a"} err="failed to get container status \"730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a\": rpc error: code = NotFound desc = could not find container \"730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a\": container with ID starting with 730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a not found: ID does not exist" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.270792 5010 scope.go:117] "RemoveContainer" containerID="6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf" Feb 03 10:05:52 crc kubenswrapper[5010]: E0203 10:05:52.271008 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf\": container with ID starting with 6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf not found: ID does not exist" containerID="6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.271031 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf"} err="failed to get container status \"6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf\": rpc error: code = NotFound desc = could not find container \"6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf\": container with ID starting with 6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf not found: ID does not exist" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.511728 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" path="/var/lib/kubelet/pods/16b28bac-b8da-4fa7-8282-3b97ef4decac/volumes" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.512948 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" path="/var/lib/kubelet/pods/e7d7a138-50ca-4706-b760-2fe5154b2796/volumes" Feb 03 10:05:53 crc kubenswrapper[5010]: I0203 10:05:53.736351 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp7rd"] Feb 03 10:05:53 crc 
Feb 03 10:05:53 crc kubenswrapper[5010]: I0203 10:05:53.736687 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rp7rd" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="registry-server" containerID="cri-o://435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26" gracePeriod=2
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.070258 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp7rd"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.113476 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-catalog-content\") pod \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") "
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.113532 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgmtk\" (UniqueName: \"kubernetes.io/projected/49f8db32-0c68-4c72-9aad-a02ce0c958aa-kube-api-access-cgmtk\") pod \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") "
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.113614 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-utilities\") pod \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") "
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.114355 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-utilities" (OuterVolumeSpecName: "utilities") pod "49f8db32-0c68-4c72-9aad-a02ce0c958aa" (UID: "49f8db32-0c68-4c72-9aad-a02ce0c958aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.118433 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f8db32-0c68-4c72-9aad-a02ce0c958aa-kube-api-access-cgmtk" (OuterVolumeSpecName: "kube-api-access-cgmtk") pod "49f8db32-0c68-4c72-9aad-a02ce0c958aa" (UID: "49f8db32-0c68-4c72-9aad-a02ce0c958aa"). InnerVolumeSpecName "kube-api-access-cgmtk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.137135 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49f8db32-0c68-4c72-9aad-a02ce0c958aa" (UID: "49f8db32-0c68-4c72-9aad-a02ce0c958aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.160978 5010 generic.go:334] "Generic (PLEG): container finished" podID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerID="435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26" exitCode=0
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.161034 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp7rd"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.161367 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp7rd" event={"ID":"49f8db32-0c68-4c72-9aad-a02ce0c958aa","Type":"ContainerDied","Data":"435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26"}
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.161493 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp7rd" event={"ID":"49f8db32-0c68-4c72-9aad-a02ce0c958aa","Type":"ContainerDied","Data":"5fb8735def162698d86190ccce3a51a4ca66746325003df2b81d78c40f569048"}
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.161530 5010 scope.go:117] "RemoveContainer" containerID="435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.176118 5010 scope.go:117] "RemoveContainer" containerID="fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.186161 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp7rd"]
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.190369 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp7rd"]
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.197545 5010 scope.go:117] "RemoveContainer" containerID="e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.210368 5010 scope.go:117] "RemoveContainer" containerID="435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26"
Feb 03 10:05:54 crc kubenswrapper[5010]: E0203 10:05:54.210763 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26\": container with ID starting with 435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26 not found: ID does not exist" containerID="435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.210795 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26"} err="failed to get container status \"435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26\": rpc error: code = NotFound desc = could not find container \"435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26\": container with ID starting with 435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26 not found: ID does not exist"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.210819 5010 scope.go:117] "RemoveContainer" containerID="fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7"
Feb 03 10:05:54 crc kubenswrapper[5010]: E0203 10:05:54.211109 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7\": container with ID starting with fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7 not found: ID does not exist" containerID="fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.211138 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7"} err="failed to get container status \"fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7\": rpc error: code = NotFound desc = could not find container \"fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7\": container with ID starting with fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7 not found: ID does not exist"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.211149 5010 scope.go:117] "RemoveContainer" containerID="e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee"
Feb 03 10:05:54 crc kubenswrapper[5010]: E0203 10:05:54.211399 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee\": container with ID starting with e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee not found: ID does not exist" containerID="e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.211420 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee"} err="failed to get container status \"e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee\": rpc error: code = NotFound desc = could not find container \"e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee\": container with ID starting with e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee not found: ID does not exist"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.215415 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.215454 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.215471 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgmtk\" (UniqueName: \"kubernetes.io/projected/49f8db32-0c68-4c72-9aad-a02ce0c958aa-kube-api-access-cgmtk\") on node \"crc\" DevicePath \"\""
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.512665 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" path="/var/lib/kubelet/pods/49f8db32-0c68-4c72-9aad-a02ce0c958aa/volumes"
Feb 03 10:06:01 crc kubenswrapper[5010]: I0203 10:06:01.279601 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rkqd6"]
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.703696 5010 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704470 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerName="extract-utilities"
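The "Cleaned up orphaned pod volumes dir" entries mark the last step of pod removal: once every volume has reported TearDown succeeded and "Volume detached", the per-pod directory /var/lib/kubelet/pods/<podUID>/volumes can be deleted. A Go sketch of the path construction with an empty-directory guard (the guard is an assumption for illustration, not lifted from kubelet):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cleanupOrphanedVolumesDir removes <root>/pods/<uid>/volumes, but only
    // once the directory no longer contains any volume entries.
    func cleanupOrphanedVolumesDir(kubeletRoot, podUID string) error {
        dir := filepath.Join(kubeletRoot, "pods", podUID, "volumes")
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        if len(entries) > 0 {
            return fmt.Errorf("%s still has %d volume entries", dir, len(entries))
        }
        return os.Remove(dir)
    }

    func main() {
        // podUID taken from the rp7rd cleanup entry above.
        err := cleanupOrphanedVolumesDir("/var/lib/kubelet", "49f8db32-0c68-4c72-9aad-a02ce0c958aa")
        fmt.Println(err)
    }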
containerName="extract-utilities" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704496 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="registry-server" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704503 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="registry-server" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704516 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="registry-server" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704523 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="registry-server" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704535 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="extract-utilities" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704542 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="extract-utilities" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704551 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="extract-content" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704559 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="extract-content" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704570 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="extract-content" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704577 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="extract-content" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704591 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="extract-utilities" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704599 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="extract-utilities" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704608 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="extract-utilities" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704616 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="extract-utilities" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704626 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="registry-server" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704635 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="registry-server" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704647 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerName="extract-content" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704654 5010 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerName="extract-content" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704667 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="extract-content" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704674 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="extract-content" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704685 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerName="registry-server" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704692 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerName="registry-server" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704811 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerName="registry-server" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704828 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="registry-server" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704837 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="registry-server" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704849 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="registry-server" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.705159 5010 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.705342 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.705490 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a" gracePeriod=15 Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.705629 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6" gracePeriod=15 Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.705659 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5" gracePeriod=15 Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.705711 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0" gracePeriod=15 Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.705713 5010 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.705759 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d" gracePeriod=15 Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.706185 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706199 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.706239 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706253 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.706265 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706275 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.706288 5010 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706296 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.706307 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706313 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.706330 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706336 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706427 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706441 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706450 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706458 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706465 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.706548 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706554 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706639 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.712172 5010 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.745996 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.746043 5010 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.746142 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.746167 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.746192 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.746269 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.746299 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.746328 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847146 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847198 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847256 5010 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847295 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847328 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847340 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847357 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847427 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847465 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847490 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847507 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847522 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847552 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847401 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847470 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847596 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.291225 5010 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.291648 5010 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.291924 5010 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.292142 5010 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.292373 5010 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.292397 5010 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.292604 5010 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="200ms" Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.306424 5010 generic.go:334] "Generic (PLEG): container finished" podID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" containerID="8235871772bfab300d8b3a5a6ad3309af90a9d4729dea3e53a02ffdbbd8677af" exitCode=0 Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.306496 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7c4b0e53-f63d-4ccf-a718-389b959a66c4","Type":"ContainerDied","Data":"8235871772bfab300d8b3a5a6ad3309af90a9d4729dea3e53a02ffdbbd8677af"} Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.307171 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.309375 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.311012 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.311853 5010 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0" exitCode=0 Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.311877 5010 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d" exitCode=0 Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.311887 5010 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5" exitCode=0 Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.311895 5010 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6" exitCode=2 Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.311926 5010 scope.go:117] "RemoveContainer" containerID="8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.493997 5010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="400ms" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.895760 5010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="800ms" Feb 03 10:06:20 crc 
Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.320878 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.503589 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.556010 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.556595 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.664698 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kube-api-access\") pod \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") "
Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.664860 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kubelet-dir\") pod \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") "
Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.664924 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-var-lock\") pod \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") "
Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.664968 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7c4b0e53-f63d-4ccf-a718-389b959a66c4" (UID: "7c4b0e53-f63d-4ccf-a718-389b959a66c4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.665048 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-var-lock" (OuterVolumeSpecName: "var-lock") pod "7c4b0e53-f63d-4ccf-a718-389b959a66c4" (UID: "7c4b0e53-f63d-4ccf-a718-389b959a66c4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.665390 5010 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.665428 5010 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-var-lock\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.670154 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7c4b0e53-f63d-4ccf-a718-389b959a66c4" (UID: "7c4b0e53-f63d-4ccf-a718-389b959a66c4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:06:20 crc kubenswrapper[5010]: E0203 10:06:20.696991 5010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="1.6s"
Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.777641 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.065153 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.066046 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.066828 5010 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.067553 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.080893 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.080966 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.080992 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.081083 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.081162 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.081169 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.181792 5010 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.181826 5010 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.181838 5010 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.329564 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.329539 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7c4b0e53-f63d-4ccf-a718-389b959a66c4","Type":"ContainerDied","Data":"47e2fb47d49372688a6df246f47c04ec60321886600acbad24a608754f55694c"}
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.330035 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47e2fb47d49372688a6df246f47c04ec60321886600acbad24a608754f55694c"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.332851 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.333631 5010 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a" exitCode=0
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.333702 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.333703 5010 scope.go:117] "RemoveContainer" containerID="8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.348902 5010 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.349347 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.349610 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.349914 5010 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.353335 5010 scope.go:117] "RemoveContainer" containerID="d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.369874 5010 scope.go:117] "RemoveContainer" containerID="93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.386826 5010 scope.go:117] "RemoveContainer" containerID="2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.400407 5010 scope.go:117] "RemoveContainer" containerID="15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.418923 5010 scope.go:117] "RemoveContainer" containerID="c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.437279 5010 scope.go:117] "RemoveContainer" containerID="8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0"
Feb 03 10:06:21 crc kubenswrapper[5010]: E0203 10:06:21.437696 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\": container with ID starting with 8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0 not found: ID does not exist" containerID="8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.437726 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0"} err="failed to get container status \"8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\": rpc error: code = NotFound desc = could not find container \"8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\": container with ID starting with 8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0 not found: ID does not exist"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.437761 5010 scope.go:117] "RemoveContainer" containerID="d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d"
Feb 03 10:06:21 crc kubenswrapper[5010]: E0203 10:06:21.438171 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\": container with ID starting with d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d not found: ID does not exist" containerID="d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.438272 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d"} err="failed to get container status \"d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\": rpc error: code = NotFound desc = could not find container \"d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\": container with ID starting with d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d not found: ID does not exist"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.438306 5010 scope.go:117] "RemoveContainer" containerID="93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5"
Feb 03 10:06:21 crc kubenswrapper[5010]: E0203 10:06:21.438628 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\": container with ID starting with 93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5 not found: ID does not exist" containerID="93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.438657 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5"} err="failed to get container status \"93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\": rpc error: code = NotFound desc = could not find container \"93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\": container with ID starting with 93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5 not found: ID does not exist"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.438671 5010 scope.go:117] "RemoveContainer" containerID="2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6"
Feb 03 10:06:21 crc kubenswrapper[5010]: E0203 10:06:21.438948 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\": container with ID starting with 2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6 not found: ID does not exist" containerID="2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.438979 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6"} err="failed to get container status \"2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\": rpc error: code = NotFound desc = could not find container \"2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\": container with ID starting with 2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6 not found: ID does not exist"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.439001 5010 scope.go:117] "RemoveContainer" containerID="15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a"
Feb 03 10:06:21 crc kubenswrapper[5010]: E0203 10:06:21.439334 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\": container with ID starting with 15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a not found: ID does not exist" containerID="15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.439381 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a"} err="failed to get container status \"15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\": rpc error: code = NotFound desc = could not find container \"15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\": container with ID starting with 15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a not found: ID does not exist"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.439413 5010 scope.go:117] "RemoveContainer" containerID="c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709"
Feb 03 10:06:21 crc kubenswrapper[5010]: E0203 10:06:21.439882 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\": container with ID starting with c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709 not found: ID does not exist" containerID="c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709"
Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.439905 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709"} err="failed to get container status \"c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\": rpc error: code = NotFound desc = could not find container \"c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\": container with ID starting with c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709 not found: ID does not exist"
Feb 03 10:06:22 crc kubenswrapper[5010]: E0203 10:06:22.297800 5010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="3.2s"
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 03 10:06:22 crc kubenswrapper[5010]: E0203 10:06:22.595046 5010 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.58:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" volumeName="registry-storage" Feb 03 10:06:23 crc kubenswrapper[5010]: E0203 10:06:23.752510 5010 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.58:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:23 crc kubenswrapper[5010]: I0203 10:06:23.753659 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:23 crc kubenswrapper[5010]: E0203 10:06:23.788265 5010 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.58:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1890b48febd4ee53 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 10:06:23.786528339 +0000 UTC m=+253.942504508,LastTimestamp:2026-02-03 10:06:23.786528339 +0000 UTC m=+253.942504508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 10:06:24 crc kubenswrapper[5010]: I0203 10:06:24.354929 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"aafef9981fa7d11562eb0bd58e7300535437ad38c9714ffedb6d952272ad69e5"} Feb 03 10:06:24 crc kubenswrapper[5010]: I0203 10:06:24.355182 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"eceb1cc15ee7168b5595c5db18d300d855c0f2bb643dcd250feb96ade1e832e1"} Feb 03 10:06:24 crc kubenswrapper[5010]: E0203 10:06:24.355771 5010 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.58:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:24 crc kubenswrapper[5010]: I0203 10:06:24.355774 5010 
status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:25 crc kubenswrapper[5010]: E0203 10:06:25.498917 5010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="6.4s" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.309016 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" containerName="oauth-openshift" containerID="cri-o://a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a" gracePeriod=15 Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.635984 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.637265 5010 status_manager.go:851] "Failed to get status for pod" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rkqd6\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.637884 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651446 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-router-certs\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651485 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-error\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651502 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-login\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651539 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-provider-selection\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: 
\"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651574 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-idp-0-file-data\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651593 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-session\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651649 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-cliconfig\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651676 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-policies\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651765 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwhnr\" (UniqueName: \"kubernetes.io/projected/5a475011-4dc0-4490-829a-8016f3b0e8a2-kube-api-access-vwhnr\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651787 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-serving-cert\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651805 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-service-ca\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651836 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-trusted-ca-bundle\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651881 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-ocp-branding-template\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651931 5010 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-dir\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.652239 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.653068 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.653191 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.653406 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.656086 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.659262 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a475011-4dc0-4490-829a-8016f3b0e8a2-kube-api-access-vwhnr" (OuterVolumeSpecName: "kube-api-access-vwhnr") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "kube-api-access-vwhnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.659930 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.660494 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.661179 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.662415 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.663003 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.663323 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.664839 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.665180 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.665180 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753508 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwhnr\" (UniqueName: \"kubernetes.io/projected/5a475011-4dc0-4490-829a-8016f3b0e8a2-kube-api-access-vwhnr\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753573 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753596 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753616 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753637 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753656 5010 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753680 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753719 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753815 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753835 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753898 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753918 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753936 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.754024 5010 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.378595 5010 generic.go:334] "Generic (PLEG): container finished" podID="5a475011-4dc0-4490-829a-8016f3b0e8a2" containerID="a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a" exitCode=0
Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.378659 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" event={"ID":"5a475011-4dc0-4490-829a-8016f3b0e8a2","Type":"ContainerDied","Data":"a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a"}
Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.378689 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" event={"ID":"5a475011-4dc0-4490-829a-8016f3b0e8a2","Type":"ContainerDied","Data":"f8f57db6b0062ed4b61ecab8e52afe31f6118dd660c843052c1d2ff893b91694"}
Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.378708 5010 scope.go:117] "RemoveContainer" containerID="a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a"
Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.378846 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.380002 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.380295 5010 status_manager.go:851] "Failed to get status for pod" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rkqd6\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.398866 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.400169 5010 status_manager.go:851] "Failed to get status for pod" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rkqd6\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.401283 5010 scope.go:117] "RemoveContainer" containerID="a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a"
Feb 03 10:06:27 crc kubenswrapper[5010]: E0203 10:06:27.401821 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a\": container with ID starting with a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a not found: ID does not exist" containerID="a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a"
Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.401898 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a"} err="failed to get container status \"a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a\": rpc error: code = NotFound desc = could not find container \"a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a\": container with ID starting with a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a not found: ID does not exist"
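The PLEG records above ("Generic (PLEG): container finished", then SyncLoop ContainerDied events) are where container lifecycle transitions surface, and the NotFound pair at 10:06:27.401821/401898 shows a second RemoveContainer attempt racing a deletion that had already completed. A sketch, under the same kubelet.log assumption as above, that rebuilds a per-pod event timeline from the event={"ID":...,"Type":...,"Data":...} payloads:

    import re
    from collections import defaultdict

    evt = re.compile(r'"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" '
                     r'event=\{"ID":"[^"]+","Type":"([^"]+)","Data":"([^"]+)"\}')

    timeline = defaultdict(list)
    with open("kubelet.log") as f:  # assumed local copy of this log
        for line in f:
            m = evt.search(line)
            if m:
                pod, etype, data = m.groups()
                timeline[pod].append((etype, data[:12]))  # short container/sandbox ID

    for pod, events in timeline.items():
        print(pod, events)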
Feb 03 10:06:29 crc kubenswrapper[5010]: I0203 10:06:29.501637 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:06:29 crc kubenswrapper[5010]: I0203 10:06:29.502450 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:29 crc kubenswrapper[5010]: I0203 10:06:29.502958 5010 status_manager.go:851] "Failed to get status for pod" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rkqd6\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:29 crc kubenswrapper[5010]: I0203 10:06:29.518954 5010 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552"
Feb 03 10:06:29 crc kubenswrapper[5010]: I0203 10:06:29.518993 5010 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552"
Feb 03 10:06:29 crc kubenswrapper[5010]: E0203 10:06:29.519312 5010 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:06:29 crc kubenswrapper[5010]: I0203 10:06:29.519762 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:06:29 crc kubenswrapper[5010]: E0203 10:06:29.556617 5010 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.58:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1890b48febd4ee53 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 10:06:23.786528339 +0000 UTC m=+253.942504508,LastTimestamp:2026-02-03 10:06:23.786528339 +0000 UTC m=+253.942504508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.407384 5010 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="130984c15228b1645c70fac6a3ea0163329e7b05678ff09e7839201026621284" exitCode=0
Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.407518 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"130984c15228b1645c70fac6a3ea0163329e7b05678ff09e7839201026621284"}
Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.407723 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e5f4d9ba8915958723475d51778beb169ae52277f2ba92d70897a4962d74ca95"}
Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.407989 5010 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552"
Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.408003 5010 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552"
Feb 03 10:06:30 crc kubenswrapper[5010]: E0203 10:06:30.408476 5010 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.408485 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.408823 5010 status_manager.go:851] "Failed to get status for pod" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rkqd6\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.505963 5010 status_manager.go:851] "Failed to get status for pod" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rkqd6\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.507307 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.507641 5010 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.58:6443: connect: connection refused"
Feb 03 10:06:31 crc kubenswrapper[5010]: I0203 10:06:31.422981 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e329c5326b6873d342f471b4c611fb436b3273601897d8e76ca8103b2a975195"}
Feb 03 10:06:31 crc kubenswrapper[5010]: I0203 10:06:31.423030 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"81092245012bb637362380c436fbe24d363cd1e8683ab57b019b3091706a06cb"}
Feb 03 10:06:31 crc kubenswrapper[5010]: I0203 10:06:31.423045 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ca05718ba8490974414ad3e3834f1f837372bed44286db631e74b158eca5e888"}
Feb 03 10:06:31 crc kubenswrapper[5010]: I0203 10:06:31.423058 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4a0a06412578de949f9ce10bb5bf1d6a63e59acc35e22482e168f9f133769da4"}
Feb 03 10:06:32 crc kubenswrapper[5010]: I0203 10:06:32.431431 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7e45bb0330e0cf83e4dc82a1b4fbd878697ef55826bdfdacc4ff20265b91488c"}
Feb 03 10:06:32 crc kubenswrapper[5010]: I0203 10:06:32.431735 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:06:32 crc kubenswrapper[5010]: I0203 10:06:32.431763 5010 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552"
Feb 03 10:06:32 crc kubenswrapper[5010]: I0203 10:06:32.431790 5010 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552"
Feb 03 10:06:33 crc kubenswrapper[5010]: I0203 10:06:33.439253 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 03 10:06:33 crc kubenswrapper[5010]: I0203 10:06:33.439741 5010 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624" exitCode=1
Feb 03 10:06:33 crc kubenswrapper[5010]: I0203 10:06:33.439822 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624"}
Feb 03 10:06:33 crc kubenswrapper[5010]: I0203 10:06:33.440196 5010 scope.go:117] "RemoveContainer" containerID="0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624"
Feb 03 10:06:34 crc kubenswrapper[5010]: I0203 10:06:34.448947 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 03 10:06:34 crc kubenswrapper[5010]: I0203 10:06:34.449327 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5bec1cfba10ca1f56b68d49b130113cc5cdf2727ab40a1341de7e7c13a51daf4"}
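The probe records below (kubelet.go:2542, "SyncLoop (probe)") track kube-apiserver-crc moving through startup=unhealthy, startup=started, and finally readiness=ready between 10:06:34 and 10:06:37. A sketch that extracts those transitions; it assumes the kubelet.log copy used above and one log record per line:

    import re

    probe = re.compile(r'"SyncLoop \(probe\)" probe="([^"]+)" status="([^"]*)" pod="([^"]+)"')

    with open("kubelet.log") as f:  # assumed local copy of this log
        for line in f:
            m = probe.search(line)
            if m and m.group(3).endswith("kube-apiserver-crc"):
                ts = line.split(" crc ")[0]  # e.g. "Feb 03 10:06:32"
                print(ts, m.group(1), m.group(2) or "(empty)")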
Feb 03 10:06:34 crc kubenswrapper[5010]: I0203 10:06:34.520538 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:06:34 crc kubenswrapper[5010]: I0203 10:06:34.520923 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:06:34 crc kubenswrapper[5010]: I0203 10:06:34.527791 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:06:35 crc kubenswrapper[5010]: I0203 10:06:35.738367 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 10:06:37 crc kubenswrapper[5010]: I0203 10:06:37.447199 5010 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:06:37 crc kubenswrapper[5010]: I0203 10:06:37.475998 5010 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552"
Feb 03 10:06:37 crc kubenswrapper[5010]: I0203 10:06:37.476030 5010 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552"
Feb 03 10:06:37 crc kubenswrapper[5010]: I0203 10:06:37.480337 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:06:37 crc kubenswrapper[5010]: I0203 10:06:37.494083 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:06:30Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0a06412578de949f9ce10bb5bf1d6a63e59acc35e22482e168f9f133769da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81092245012bb637362380c436fbe24d363cd1e8683ab57b019b3091706a06cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca05718ba8490974414ad3e3834f1f837372bed44286db631e74b158eca5e888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e45bb0330e0cf83e4dc82a1b4fbd878697ef55826bdfdacc4ff20265b91488c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329c5326b6873d342f471b4c611fb436b3273601897d8e76ca8103b2a975195\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:06:31Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://130984c15228b1645c70fac6a3ea0163329e7b05678ff09e7839201026621284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://130984c15228b1645c70fac6a3ea0163329e7b05678ff09e7839201026621284\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}]}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"f83e6949-33d8-4005-aece-aaede1aac552\": field is immutable"
Feb 03 10:06:37 crc kubenswrapper[5010]: I0203 10:06:37.562562 5010 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="39895606-8c63-4761-bd8f-01d17ba4215e"
Feb 03 10:06:38 crc kubenswrapper[5010]: I0203 10:06:38.482313 5010 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552"
Feb 03 10:06:38 crc kubenswrapper[5010]: I0203 10:06:38.482372 5010 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552"
Feb 03 10:06:38 crc kubenswrapper[5010]: I0203 10:06:38.487381 5010 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="39895606-8c63-4761-bd8f-01d17ba4215e"
Feb 03 10:06:40 crc kubenswrapper[5010]: I0203 10:06:40.536472 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 10:06:40 crc kubenswrapper[5010]: I0203 10:06:40.540383 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 10:06:43 crc kubenswrapper[5010]: I0203 10:06:43.952419 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 03 10:06:44 crc kubenswrapper[5010]: I0203 10:06:44.016302 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
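The 10:06:37.494083 failure above is the status manager patching pod status against the old mirror-pod UID f83e6949-33d8-4005-aece-aaede1aac552 after the mirror pod had been deleted and recreated; the API server rejects the patch because metadata.uid is immutable, and the following status_manager.go:861 record notes the UID handoff from 71bb4a3aecc4ba5b26c4b7318770ce13 to 39895606-8c63-4761-bd8f-01d17ba4215e instead. A sketch listing those handoffs, under the same kubelet.log assumption as above:

    import re

    recreated = re.compile(r'"Pod was deleted and then recreated, skipping status update" '
                           r'pod="([^"]+)" oldPodUID="([^"]+)" podUID="([^"]+)"')

    with open("kubelet.log") as f:  # assumed local copy of this log
        for line in f:
            m = recreated.search(line)
            if m:
                print(f"{m.group(1)}: {m.group(2)} -> {m.group(3)}")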
Feb 03 10:06:44 crc kubenswrapper[5010]: I0203 10:06:44.027146 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 03 10:06:44 crc kubenswrapper[5010]: I0203 10:06:44.243120 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 03 10:06:45 crc kubenswrapper[5010]: I0203 10:06:45.495197 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 03 10:06:45 crc kubenswrapper[5010]: I0203 10:06:45.716634 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 03 10:06:45 crc kubenswrapper[5010]: I0203 10:06:45.716637 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 03 10:06:45 crc kubenswrapper[5010]: I0203 10:06:45.748758 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 10:06:46 crc kubenswrapper[5010]: I0203 10:06:46.502361 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 03 10:06:46 crc kubenswrapper[5010]: I0203 10:06:46.755610 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 03 10:06:46 crc kubenswrapper[5010]: I0203 10:06:46.841577 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 03 10:06:47 crc kubenswrapper[5010]: I0203 10:06:47.406411 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 03 10:06:47 crc kubenswrapper[5010]: I0203 10:06:47.685640 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 03 10:06:48 crc kubenswrapper[5010]: I0203 10:06:48.603752 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.296353 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.312178 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.377854 5010 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.384650 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.401057 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.501597 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.669302 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.950392 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 03 10:06:50 crc kubenswrapper[5010]: I0203 10:06:50.019207 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 03 10:06:50 crc kubenswrapper[5010]: I0203 10:06:50.108646 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 03 10:06:50 crc kubenswrapper[5010]: I0203 10:06:50.257643 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 03 10:06:50 crc kubenswrapper[5010]: I0203 10:06:50.341479 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 03 10:06:50 crc kubenswrapper[5010]: I0203 10:06:50.417737 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 03 10:06:50 crc kubenswrapper[5010]: I0203 10:06:50.589515 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 03 10:06:50 crc kubenswrapper[5010]: I0203 10:06:50.916196 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.042375 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.084511 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.418859 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.758461 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.853803 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.897941 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.939726 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.942677 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.955631 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.964606 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.081007 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.311723 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.351915 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.424143 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.565657 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.565656 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.645890 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.739582 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.842892 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.186863 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.191986 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.247649 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.327036 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.328693 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.428947 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.467588 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.511180 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.545040 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.677467 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.700139 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.830243 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.963853 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.011975 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.034584 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.083060 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.118205 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.181977 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.331373 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.347011 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.353941 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.359118 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.387638 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.444624 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.460578 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.468029 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.577687 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.608495 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.871356 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.954645 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
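The reflector.go:368 flood that starts at 10:06:43.952419 is the kubelet repopulating its informer caches for the ConfigMaps and Secrets its pods reference, now that the API server answers again. A sketch tallying the repopulated caches by resource type and namespace, same kubelet.log assumption as above (records sourced from the factory.go informers carry no object-"ns"/"name" suffix and are simply not matched):

    import re
    from collections import Counter

    cache = re.compile(r'Caches populated for (\*v1\.\w+) from object-"([^"]+)"/"([^"]+)"')

    counts = Counter()
    with open("kubelet.log") as f:  # assumed local copy of this log
        for line in f:
            m = cache.search(line)
            if m:
                counts[(m.group(1), m.group(2))] += 1

    for (kind, ns), n in counts.most_common(10):
        print(f"{n:3d} {kind:15s} {ns}")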
Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.979187 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.053974 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.094735 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.183626 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.270733 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.308727 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.369331 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.397748 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.454012 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.548054 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.577179 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.578620 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.593012 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.602741 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.845710 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.845801 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.847230 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.856571 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.872377 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.025429 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.079677 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.123374 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.161171 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.166795 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.175862 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.296080 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.388763 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.511383 5010 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.532926 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.534282 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.605525 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.651829 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.804559 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.805160 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.926084 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.984659 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.984775 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.036537 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.068052 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.188885 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.296073 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.311151 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.393431 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.473192 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.600890 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.896200 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.982360 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.021953 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.024615 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.041536 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.048802 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.176748 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.341283 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.381969 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.423858 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.424658 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.462026 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.485389 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.510245 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.514472 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.741292 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.800516 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.879695 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.885819 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.007440 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.053419 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.072983 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.089087 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.089128 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.100910 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.194744 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.299372 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.326294 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.328275 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.371888 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.432283 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.455373 5010 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.544020 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.626956 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.750810 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.751692 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.768818 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.804295 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.841557 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.891331 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.894203 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.022271 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.064323 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.067547 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.103728 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.108271 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.172990 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.378767 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.477049 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.593515 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.620168 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.627668 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.651357 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.706848 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.788486 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.798093 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.824845 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.862293 5010 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866068 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-rkqd6"]
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866118 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-55896b6b9d-9qj5p"]
Feb 03 10:07:00 crc kubenswrapper[5010]: E0203 10:07:00.866303 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" containerName="oauth-openshift"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866325 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" containerName="oauth-openshift"
Feb 03 10:07:00 crc kubenswrapper[5010]: E0203 10:07:00.866336 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" containerName="installer"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866354 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" containerName="installer"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866591 5010 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866620 5010 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866939 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" containerName="installer"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866968 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" containerName="oauth-openshift"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.867662 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.871091 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.871200 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.872566 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.876114 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.876175 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.876323 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.881272 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.881297 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.881358 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.881307 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.881396 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.881538 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.881590 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.887890 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.893049 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.895266 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.895293 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.916195 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.916178181
podStartE2EDuration="23.916178181s" podCreationTimestamp="2026-02-03 10:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:00.913566341 +0000 UTC m=+291.069542470" watchObservedRunningTime="2026-02-03 10:07:00.916178181 +0000 UTC m=+291.072154310" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.947126 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.994980 5010 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019792 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019837 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-router-certs\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019866 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019887 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-audit-policies\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019910 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-error\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019929 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b2ct\" (UniqueName: \"kubernetes.io/projected/ed8954d4-a9be-4760-8944-4e7da0eadcab-kube-api-access-8b2ct\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019945 5010 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019960 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed8954d4-a9be-4760-8944-4e7da0eadcab-audit-dir\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019979 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.020001 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-session\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.020018 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-service-ca\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.020034 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.020051 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-login\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.020071 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: 
\"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121444 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121485 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-router-certs\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121514 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121536 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-audit-policies\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121563 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-error\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121589 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b2ct\" (UniqueName: \"kubernetes.io/projected/ed8954d4-a9be-4760-8944-4e7da0eadcab-kube-api-access-8b2ct\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121618 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121642 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed8954d4-a9be-4760-8944-4e7da0eadcab-audit-dir\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " 
pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121670 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121700 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-session\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121725 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-service-ca\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121755 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121778 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-login\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121803 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.122438 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-audit-policies\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.122514 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed8954d4-a9be-4760-8944-4e7da0eadcab-audit-dir\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" 
Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.122820 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.123038 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-service-ca\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.124094 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.126808 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-session\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.127259 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-login\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.128663 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.128907 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-router-certs\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.129116 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-error\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 
10:07:01.129354 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.130470 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.130729 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.136860 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.138665 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b2ct\" (UniqueName: \"kubernetes.io/projected/ed8954d4-a9be-4760-8944-4e7da0eadcab-kube-api-access-8b2ct\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.194690 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.203102 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.404591 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55896b6b9d-9qj5p"] Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.462346 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.468437 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.520295 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.538941 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.599345 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" event={"ID":"ed8954d4-a9be-4760-8944-4e7da0eadcab","Type":"ContainerStarted","Data":"f11c9e27c0a8c5d17b1343cd4d162b3a3667b342949536f6b6607f8c8ae493dd"} Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.619407 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.771311 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.798074 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.834885 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.848607 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.937310 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.959274 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.036524 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.257662 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.318180 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.321601 5010 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.356597 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.360866 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.368439 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.495313 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.498336 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.508113 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" path="/var/lib/kubelet/pods/5a475011-4dc0-4490-829a-8016f3b0e8a2/volumes" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.544149 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.604754 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" event={"ID":"ed8954d4-a9be-4760-8944-4e7da0eadcab","Type":"ContainerStarted","Data":"7b9f6fe6dd230da7bd7852cf9c0b7300054690be522e49d93983867325faf008"} Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.605023 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.610119 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.623326 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" podStartSLOduration=61.623307987 podStartE2EDuration="1m1.623307987s" podCreationTimestamp="2026-02-03 10:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:02.623145733 +0000 UTC m=+292.779121872" watchObservedRunningTime="2026-02-03 10:07:02.623307987 +0000 UTC m=+292.779284116" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.767231 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.853477 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.882866 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.973342 5010 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.117109 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.358874 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.362181 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.407731 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.546977 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.608624 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.754319 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.803854 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.071445 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.108866 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.159187 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.395120 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.562516 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.639629 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.715252 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.849163 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.878157 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 03 10:07:05 crc kubenswrapper[5010]: I0203 10:07:05.302963 5010 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 03 10:07:05 crc kubenswrapper[5010]: I0203 10:07:05.396092 5010 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 03 10:07:05 crc kubenswrapper[5010]: I0203 10:07:05.435515 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 03 10:07:05 crc kubenswrapper[5010]: I0203 10:07:05.750416 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 10:07:05 crc kubenswrapper[5010]: I0203 10:07:05.956259 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 03 10:07:06 crc kubenswrapper[5010]: I0203 10:07:06.188780 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 03 10:07:06 crc kubenswrapper[5010]: I0203 10:07:06.231648 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 03 10:07:06 crc kubenswrapper[5010]: I0203 10:07:06.888794 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 10:07:07 crc kubenswrapper[5010]: I0203 10:07:07.513668 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 03 10:07:08 crc kubenswrapper[5010]: I0203 10:07:08.058674 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 03 10:07:10 crc kubenswrapper[5010]: I0203 10:07:10.295427 5010 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 03 10:07:11 crc kubenswrapper[5010]: I0203 10:07:11.127851 5010 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 10:07:11 crc kubenswrapper[5010]: I0203 10:07:11.128508 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://aafef9981fa7d11562eb0bd58e7300535437ad38c9714ffedb6d952272ad69e5" gracePeriod=5 Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.677260 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.678411 5010 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="aafef9981fa7d11562eb0bd58e7300535437ad38c9714ffedb6d952272ad69e5" exitCode=137 Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.678487 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eceb1cc15ee7168b5595c5db18d300d855c0f2bb643dcd250feb96ade1e832e1" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.693999 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.694083 5010 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735061 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735125 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735142 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735156 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735183 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735259 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735322 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735377 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735374 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735559 5010 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735578 5010 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735592 5010 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735604 5010 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.742498 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.837400 5010 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:17 crc kubenswrapper[5010]: I0203 10:07:17.682319 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:07:18 crc kubenswrapper[5010]: I0203 10:07:18.511691 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.401918 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc7dd"] Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.402699 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" podUID="e27ae235-3c1c-4ee0-85b6-a53477e335e5" containerName="controller-manager" containerID="cri-o://9193e654b0aae87a0f6cb66b87865bff8d5a0d8845927c6e2ff446174e9141b4" gracePeriod=30 Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.499895 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"] Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.500138 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" podUID="61153282-2bd6-4bbf-a04a-76909b13f961" containerName="route-controller-manager" containerID="cri-o://815c9a092d4240f3fb7d7c856a7d1fe04289a8f354f5c335fb93d5de0abf1f2c" gracePeriod=30 Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.741837 5010 generic.go:334] "Generic (PLEG): container finished" podID="e27ae235-3c1c-4ee0-85b6-a53477e335e5" containerID="9193e654b0aae87a0f6cb66b87865bff8d5a0d8845927c6e2ff446174e9141b4" exitCode=0 Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.741912 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" event={"ID":"e27ae235-3c1c-4ee0-85b6-a53477e335e5","Type":"ContainerDied","Data":"9193e654b0aae87a0f6cb66b87865bff8d5a0d8845927c6e2ff446174e9141b4"} Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.741941 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" event={"ID":"e27ae235-3c1c-4ee0-85b6-a53477e335e5","Type":"ContainerDied","Data":"8b56ac9ef9b68e183b29025350e04525ecb7ee2dc150d387fdfd29f29126ba81"} Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.741955 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b56ac9ef9b68e183b29025350e04525ecb7ee2dc150d387fdfd29f29126ba81" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.743745 5010 generic.go:334] "Generic (PLEG): container finished" podID="61153282-2bd6-4bbf-a04a-76909b13f961" containerID="815c9a092d4240f3fb7d7c856a7d1fe04289a8f354f5c335fb93d5de0abf1f2c" exitCode=0 Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.743779 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" event={"ID":"61153282-2bd6-4bbf-a04a-76909b13f961","Type":"ContainerDied","Data":"815c9a092d4240f3fb7d7c856a7d1fe04289a8f354f5c335fb93d5de0abf1f2c"} Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.745473 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.805998 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.902147 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27ae235-3c1c-4ee0-85b6-a53477e335e5-serving-cert\") pod \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.902300 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzx2n\" (UniqueName: \"kubernetes.io/projected/e27ae235-3c1c-4ee0-85b6-a53477e335e5-kube-api-access-lzx2n\") pod \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.902347 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-proxy-ca-bundles\") pod \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.903348 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-config\") pod \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.903384 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-client-ca\") pod \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.902983 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e27ae235-3c1c-4ee0-85b6-a53477e335e5" (UID: "e27ae235-3c1c-4ee0-85b6-a53477e335e5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.903919 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-client-ca" (OuterVolumeSpecName: "client-ca") pod "e27ae235-3c1c-4ee0-85b6-a53477e335e5" (UID: "e27ae235-3c1c-4ee0-85b6-a53477e335e5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.904471 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-config" (OuterVolumeSpecName: "config") pod "e27ae235-3c1c-4ee0-85b6-a53477e335e5" (UID: "e27ae235-3c1c-4ee0-85b6-a53477e335e5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.907199 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27ae235-3c1c-4ee0-85b6-a53477e335e5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e27ae235-3c1c-4ee0-85b6-a53477e335e5" (UID: "e27ae235-3c1c-4ee0-85b6-a53477e335e5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.907333 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27ae235-3c1c-4ee0-85b6-a53477e335e5-kube-api-access-lzx2n" (OuterVolumeSpecName: "kube-api-access-lzx2n") pod "e27ae235-3c1c-4ee0-85b6-a53477e335e5" (UID: "e27ae235-3c1c-4ee0-85b6-a53477e335e5"). InnerVolumeSpecName "kube-api-access-lzx2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004075 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-client-ca\") pod \"61153282-2bd6-4bbf-a04a-76909b13f961\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004137 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61153282-2bd6-4bbf-a04a-76909b13f961-serving-cert\") pod \"61153282-2bd6-4bbf-a04a-76909b13f961\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004192 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzqxj\" (UniqueName: \"kubernetes.io/projected/61153282-2bd6-4bbf-a04a-76909b13f961-kube-api-access-wzqxj\") pod \"61153282-2bd6-4bbf-a04a-76909b13f961\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004248 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-config\") pod \"61153282-2bd6-4bbf-a04a-76909b13f961\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004517 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27ae235-3c1c-4ee0-85b6-a53477e335e5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004535 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzx2n\" (UniqueName: \"kubernetes.io/projected/e27ae235-3c1c-4ee0-85b6-a53477e335e5-kube-api-access-lzx2n\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004548 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004561 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004573 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.005039 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-client-ca" (OuterVolumeSpecName: "client-ca") pod "61153282-2bd6-4bbf-a04a-76909b13f961" (UID: "61153282-2bd6-4bbf-a04a-76909b13f961"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.005238 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-config" (OuterVolumeSpecName: "config") pod "61153282-2bd6-4bbf-a04a-76909b13f961" (UID: "61153282-2bd6-4bbf-a04a-76909b13f961"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.007980 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61153282-2bd6-4bbf-a04a-76909b13f961-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "61153282-2bd6-4bbf-a04a-76909b13f961" (UID: "61153282-2bd6-4bbf-a04a-76909b13f961"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.008272 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61153282-2bd6-4bbf-a04a-76909b13f961-kube-api-access-wzqxj" (OuterVolumeSpecName: "kube-api-access-wzqxj") pod "61153282-2bd6-4bbf-a04a-76909b13f961" (UID: "61153282-2bd6-4bbf-a04a-76909b13f961"). InnerVolumeSpecName "kube-api-access-wzqxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.105318 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.105359 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.105370 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61153282-2bd6-4bbf-a04a-76909b13f961-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.105382 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzqxj\" (UniqueName: \"kubernetes.io/projected/61153282-2bd6-4bbf-a04a-76909b13f961-kube-api-access-wzqxj\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.752828 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.752851 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" event={"ID":"61153282-2bd6-4bbf-a04a-76909b13f961","Type":"ContainerDied","Data":"de6014a42b56ede90300ddd6921cb59d6826d8880dbadae1fda87913014c2ca8"} Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.752874 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.752911 5010 scope.go:117] "RemoveContainer" containerID="815c9a092d4240f3fb7d7c856a7d1fe04289a8f354f5c335fb93d5de0abf1f2c" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.789888 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc7dd"] Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.792849 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc7dd"] Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.802079 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"] Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.805738 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"] Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.976450 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-556878559b-xhhgj"] Feb 03 10:07:27 crc kubenswrapper[5010]: E0203 10:07:27.976790 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27ae235-3c1c-4ee0-85b6-a53477e335e5" containerName="controller-manager" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.976819 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27ae235-3c1c-4ee0-85b6-a53477e335e5" containerName="controller-manager" Feb 03 10:07:27 crc kubenswrapper[5010]: E0203 10:07:27.976839 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.976851 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 03 10:07:27 crc kubenswrapper[5010]: E0203 10:07:27.977148 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61153282-2bd6-4bbf-a04a-76909b13f961" containerName="route-controller-manager" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.977316 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="61153282-2bd6-4bbf-a04a-76909b13f961" containerName="route-controller-manager" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.977696 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.977740 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27ae235-3c1c-4ee0-85b6-a53477e335e5" containerName="controller-manager" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.977762 5010 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="61153282-2bd6-4bbf-a04a-76909b13f961" containerName="route-controller-manager" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.978346 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.981871 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.982377 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.982716 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.982983 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt"] Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.983511 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.983818 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.983937 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.984785 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.990148 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.990320 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.990534 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.990763 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.990849 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.991068 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.999508 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.999848 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-556878559b-xhhgj"] Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.008705 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt"] 
Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123621 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-client-ca\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123684 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30cf9a28-0b6e-4cf8-b513-fa463560e886-serving-cert\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123707 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803625af-3cec-45c4-98a2-08da45692f88-serving-cert\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123727 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-proxy-ca-bundles\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123741 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2rkt\" (UniqueName: \"kubernetes.io/projected/30cf9a28-0b6e-4cf8-b513-fa463560e886-kube-api-access-d2rkt\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123764 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-config\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123785 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfglv\" (UniqueName: \"kubernetes.io/projected/803625af-3cec-45c4-98a2-08da45692f88-kube-api-access-jfglv\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123805 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-client-ca\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " 
pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123820 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-config\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.224634 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfglv\" (UniqueName: \"kubernetes.io/projected/803625af-3cec-45c4-98a2-08da45692f88-kube-api-access-jfglv\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.225031 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-client-ca\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.225266 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-config\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.225612 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-client-ca\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.225838 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30cf9a28-0b6e-4cf8-b513-fa463560e886-serving-cert\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.226000 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-client-ca\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.226007 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803625af-3cec-45c4-98a2-08da45692f88-serving-cert\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.226096 5010 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-proxy-ca-bundles\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.226130 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2rkt\" (UniqueName: \"kubernetes.io/projected/30cf9a28-0b6e-4cf8-b513-fa463560e886-kube-api-access-d2rkt\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.226156 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-config\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.227374 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-config\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.227680 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-config\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.228707 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-client-ca\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.228924 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-proxy-ca-bundles\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.234996 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803625af-3cec-45c4-98a2-08da45692f88-serving-cert\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.238160 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30cf9a28-0b6e-4cf8-b513-fa463560e886-serving-cert\") pod 
\"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.248722 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfglv\" (UniqueName: \"kubernetes.io/projected/803625af-3cec-45c4-98a2-08da45692f88-kube-api-access-jfglv\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.248862 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2rkt\" (UniqueName: \"kubernetes.io/projected/30cf9a28-0b6e-4cf8-b513-fa463560e886-kube-api-access-d2rkt\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.305941 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.321730 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.510291 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61153282-2bd6-4bbf-a04a-76909b13f961" path="/var/lib/kubelet/pods/61153282-2bd6-4bbf-a04a-76909b13f961/volumes" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.511295 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27ae235-3c1c-4ee0-85b6-a53477e335e5" path="/var/lib/kubelet/pods/e27ae235-3c1c-4ee0-85b6-a53477e335e5/volumes" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.539761 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-556878559b-xhhgj"] Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.590606 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt"] Feb 03 10:07:28 crc kubenswrapper[5010]: W0203 10:07:28.596356 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod803625af_3cec_45c4_98a2_08da45692f88.slice/crio-00f89a3fa11f161985f39a10a6a8ae129cff868d26ae98a480538d6e0b0ca29f WatchSource:0}: Error finding container 00f89a3fa11f161985f39a10a6a8ae129cff868d26ae98a480538d6e0b0ca29f: Status 404 returned error can't find the container with id 00f89a3fa11f161985f39a10a6a8ae129cff868d26ae98a480538d6e0b0ca29f Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.760178 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" event={"ID":"803625af-3cec-45c4-98a2-08da45692f88","Type":"ContainerStarted","Data":"a2e7d9b77453479a86c7ec92a3e914d2ca2b35e41ce40278a55f958d04f671ca"} Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.760503 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.760519 5010 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" event={"ID":"803625af-3cec-45c4-98a2-08da45692f88","Type":"ContainerStarted","Data":"00f89a3fa11f161985f39a10a6a8ae129cff868d26ae98a480538d6e0b0ca29f"} Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.761707 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" event={"ID":"30cf9a28-0b6e-4cf8-b513-fa463560e886","Type":"ContainerStarted","Data":"8353a44a5500e444d2337a68b2c4782198c30ca7befd61a0c2d9c52c3869471c"} Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.761775 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" event={"ID":"30cf9a28-0b6e-4cf8-b513-fa463560e886","Type":"ContainerStarted","Data":"c78d78b59354c76497fffece8dac6bbcd201b1d7431edbd2dda46259787581a3"} Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.761900 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.762285 5010 patch_prober.go:28] interesting pod/route-controller-manager-df4484484-vwxdt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.762326 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" podUID="803625af-3cec-45c4-98a2-08da45692f88" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.763391 5010 patch_prober.go:28] interesting pod/controller-manager-556878559b-xhhgj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.763428 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" podUID="30cf9a28-0b6e-4cf8-b513-fa463560e886" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.775093 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" podStartSLOduration=2.775078953 podStartE2EDuration="2.775078953s" podCreationTimestamp="2026-02-03 10:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:28.774109027 +0000 UTC m=+318.930085156" watchObservedRunningTime="2026-02-03 10:07:28.775078953 +0000 UTC m=+318.931055082" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.791782 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" podStartSLOduration=2.791766168 
podStartE2EDuration="2.791766168s" podCreationTimestamp="2026-02-03 10:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:28.791530161 +0000 UTC m=+318.947506290" watchObservedRunningTime="2026-02-03 10:07:28.791766168 +0000 UTC m=+318.947742297" Feb 03 10:07:29 crc kubenswrapper[5010]: I0203 10:07:29.771750 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:29 crc kubenswrapper[5010]: I0203 10:07:29.772507 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.575016 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-556878559b-xhhgj"] Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.575572 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" podUID="30cf9a28-0b6e-4cf8-b513-fa463560e886" containerName="controller-manager" containerID="cri-o://8353a44a5500e444d2337a68b2c4782198c30ca7befd61a0c2d9c52c3869471c" gracePeriod=30 Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.598726 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt"] Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.598926 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" podUID="803625af-3cec-45c4-98a2-08da45692f88" containerName="route-controller-manager" containerID="cri-o://a2e7d9b77453479a86c7ec92a3e914d2ca2b35e41ce40278a55f958d04f671ca" gracePeriod=30 Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.787818 5010 generic.go:334] "Generic (PLEG): container finished" podID="30cf9a28-0b6e-4cf8-b513-fa463560e886" containerID="8353a44a5500e444d2337a68b2c4782198c30ca7befd61a0c2d9c52c3869471c" exitCode=0 Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.787906 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" event={"ID":"30cf9a28-0b6e-4cf8-b513-fa463560e886","Type":"ContainerDied","Data":"8353a44a5500e444d2337a68b2c4782198c30ca7befd61a0c2d9c52c3869471c"} Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.791927 5010 generic.go:334] "Generic (PLEG): container finished" podID="803625af-3cec-45c4-98a2-08da45692f88" containerID="a2e7d9b77453479a86c7ec92a3e914d2ca2b35e41ce40278a55f958d04f671ca" exitCode=0 Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.791974 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" event={"ID":"803625af-3cec-45c4-98a2-08da45692f88","Type":"ContainerDied","Data":"a2e7d9b77453479a86c7ec92a3e914d2ca2b35e41ce40278a55f958d04f671ca"} Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.993789 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.087091 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803625af-3cec-45c4-98a2-08da45692f88-serving-cert\") pod \"803625af-3cec-45c4-98a2-08da45692f88\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.087144 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfglv\" (UniqueName: \"kubernetes.io/projected/803625af-3cec-45c4-98a2-08da45692f88-kube-api-access-jfglv\") pod \"803625af-3cec-45c4-98a2-08da45692f88\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.087175 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-client-ca\") pod \"803625af-3cec-45c4-98a2-08da45692f88\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.087271 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-config\") pod \"803625af-3cec-45c4-98a2-08da45692f88\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.088364 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-config" (OuterVolumeSpecName: "config") pod "803625af-3cec-45c4-98a2-08da45692f88" (UID: "803625af-3cec-45c4-98a2-08da45692f88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.091141 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-client-ca" (OuterVolumeSpecName: "client-ca") pod "803625af-3cec-45c4-98a2-08da45692f88" (UID: "803625af-3cec-45c4-98a2-08da45692f88"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.096466 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/803625af-3cec-45c4-98a2-08da45692f88-kube-api-access-jfglv" (OuterVolumeSpecName: "kube-api-access-jfglv") pod "803625af-3cec-45c4-98a2-08da45692f88" (UID: "803625af-3cec-45c4-98a2-08da45692f88"). InnerVolumeSpecName "kube-api-access-jfglv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.096785 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803625af-3cec-45c4-98a2-08da45692f88-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "803625af-3cec-45c4-98a2-08da45692f88" (UID: "803625af-3cec-45c4-98a2-08da45692f88"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.145741 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.187848 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2rkt\" (UniqueName: \"kubernetes.io/projected/30cf9a28-0b6e-4cf8-b513-fa463560e886-kube-api-access-d2rkt\") pod \"30cf9a28-0b6e-4cf8-b513-fa463560e886\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.187894 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-proxy-ca-bundles\") pod \"30cf9a28-0b6e-4cf8-b513-fa463560e886\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.187927 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-config\") pod \"30cf9a28-0b6e-4cf8-b513-fa463560e886\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.187959 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-client-ca\") pod \"30cf9a28-0b6e-4cf8-b513-fa463560e886\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.187982 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30cf9a28-0b6e-4cf8-b513-fa463560e886-serving-cert\") pod \"30cf9a28-0b6e-4cf8-b513-fa463560e886\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.188125 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803625af-3cec-45c4-98a2-08da45692f88-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.188137 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfglv\" (UniqueName: \"kubernetes.io/projected/803625af-3cec-45c4-98a2-08da45692f88-kube-api-access-jfglv\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.188146 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.188154 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.188628 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "30cf9a28-0b6e-4cf8-b513-fa463560e886" (UID: "30cf9a28-0b6e-4cf8-b513-fa463560e886"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.189171 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-config" (OuterVolumeSpecName: "config") pod "30cf9a28-0b6e-4cf8-b513-fa463560e886" (UID: "30cf9a28-0b6e-4cf8-b513-fa463560e886"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.189508 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-client-ca" (OuterVolumeSpecName: "client-ca") pod "30cf9a28-0b6e-4cf8-b513-fa463560e886" (UID: "30cf9a28-0b6e-4cf8-b513-fa463560e886"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.192408 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30cf9a28-0b6e-4cf8-b513-fa463560e886-kube-api-access-d2rkt" (OuterVolumeSpecName: "kube-api-access-d2rkt") pod "30cf9a28-0b6e-4cf8-b513-fa463560e886" (UID: "30cf9a28-0b6e-4cf8-b513-fa463560e886"). InnerVolumeSpecName "kube-api-access-d2rkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.197379 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30cf9a28-0b6e-4cf8-b513-fa463560e886-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "30cf9a28-0b6e-4cf8-b513-fa463560e886" (UID: "30cf9a28-0b6e-4cf8-b513-fa463560e886"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.289160 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30cf9a28-0b6e-4cf8-b513-fa463560e886-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.289189 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2rkt\" (UniqueName: \"kubernetes.io/projected/30cf9a28-0b6e-4cf8-b513-fa463560e886-kube-api-access-d2rkt\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.289199 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.289210 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.289230 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.797923 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.797916 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" event={"ID":"803625af-3cec-45c4-98a2-08da45692f88","Type":"ContainerDied","Data":"00f89a3fa11f161985f39a10a6a8ae129cff868d26ae98a480538d6e0b0ca29f"} Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.798263 5010 scope.go:117] "RemoveContainer" containerID="a2e7d9b77453479a86c7ec92a3e914d2ca2b35e41ce40278a55f958d04f671ca" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.800141 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" event={"ID":"30cf9a28-0b6e-4cf8-b513-fa463560e886","Type":"ContainerDied","Data":"c78d78b59354c76497fffece8dac6bbcd201b1d7431edbd2dda46259787581a3"} Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.800194 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.815565 5010 scope.go:117] "RemoveContainer" containerID="8353a44a5500e444d2337a68b2c4782198c30ca7befd61a0c2d9c52c3869471c" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.829484 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt"] Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.836175 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt"] Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.841626 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-556878559b-xhhgj"] Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.844860 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-556878559b-xhhgj"] Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.982143 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h"] Feb 03 10:07:33 crc kubenswrapper[5010]: E0203 10:07:33.982415 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803625af-3cec-45c4-98a2-08da45692f88" containerName="route-controller-manager" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.982429 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="803625af-3cec-45c4-98a2-08da45692f88" containerName="route-controller-manager" Feb 03 10:07:33 crc kubenswrapper[5010]: E0203 10:07:33.982445 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30cf9a28-0b6e-4cf8-b513-fa463560e886" containerName="controller-manager" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.982452 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="30cf9a28-0b6e-4cf8-b513-fa463560e886" containerName="controller-manager" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.982559 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="30cf9a28-0b6e-4cf8-b513-fa463560e886" containerName="controller-manager" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.982568 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="803625af-3cec-45c4-98a2-08da45692f88" 
containerName="route-controller-manager" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.982912 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.984781 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.984805 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.000090 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.000694 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.002286 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-serving-cert\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.002337 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-client-ca\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.002362 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-config\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.002391 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsqq6\" (UniqueName: \"kubernetes.io/projected/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-kube-api-access-dsqq6\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.002421 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-proxy-ca-bundles\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.002745 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.003146 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 
03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.004048 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw"] Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.004534 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.005311 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.013802 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.014255 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.014450 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.014573 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.014685 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.014631 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.016428 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw"] Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.021720 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h"] Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103060 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-client-ca\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103109 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-config\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103143 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsqq6\" (UniqueName: \"kubernetes.io/projected/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-kube-api-access-dsqq6\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103179 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-config\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103198 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjv6z\" (UniqueName: \"kubernetes.io/projected/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-kube-api-access-qjv6z\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103222 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-proxy-ca-bundles\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103491 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-serving-cert\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103534 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-client-ca\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103694 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-serving-cert\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.104191 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-client-ca\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.104520 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-proxy-ca-bundles\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.105596 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-config\") pod 
\"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.122676 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-serving-cert\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.125431 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsqq6\" (UniqueName: \"kubernetes.io/projected/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-kube-api-access-dsqq6\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.204501 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-config\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.204769 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjv6z\" (UniqueName: \"kubernetes.io/projected/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-kube-api-access-qjv6z\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.204916 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-serving-cert\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.205019 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-client-ca\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.205933 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-client-ca\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.206560 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-config\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " 
pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.210881 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-serving-cert\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.224062 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjv6z\" (UniqueName: \"kubernetes.io/projected/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-kube-api-access-qjv6z\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.315168 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.337265 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.510074 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30cf9a28-0b6e-4cf8-b513-fa463560e886" path="/var/lib/kubelet/pods/30cf9a28-0b6e-4cf8-b513-fa463560e886/volumes" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.510806 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="803625af-3cec-45c4-98a2-08da45692f88" path="/var/lib/kubelet/pods/803625af-3cec-45c4-98a2-08da45692f88/volumes" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.551127 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h"] Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.808524 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" event={"ID":"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea","Type":"ContainerStarted","Data":"b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e"} Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.808797 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" event={"ID":"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea","Type":"ContainerStarted","Data":"1c4e6d1216d15486952944a34883d2752e446df691ee61abdfb4affcfd9e809d"} Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.809846 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.811162 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw"] Feb 03 10:07:34 crc kubenswrapper[5010]: W0203 10:07:34.820451 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb13d6ce0_d473_4529_89a4_2e7b8ad864b3.slice/crio-39efd5ea97ac3b2dc44326e763a027b144e99ab980f51894254b44b9a8a1f54d WatchSource:0}: Error finding container 
39efd5ea97ac3b2dc44326e763a027b144e99ab980f51894254b44b9a8a1f54d: Status 404 returned error can't find the container with id 39efd5ea97ac3b2dc44326e763a027b144e99ab980f51894254b44b9a8a1f54d Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.821230 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.854655 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" podStartSLOduration=2.854638204 podStartE2EDuration="2.854638204s" podCreationTimestamp="2026-02-03 10:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:34.831749673 +0000 UTC m=+324.987725802" watchObservedRunningTime="2026-02-03 10:07:34.854638204 +0000 UTC m=+325.010614333" Feb 03 10:07:35 crc kubenswrapper[5010]: I0203 10:07:35.817854 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" event={"ID":"b13d6ce0-d473-4529-89a4-2e7b8ad864b3","Type":"ContainerStarted","Data":"681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9"} Feb 03 10:07:35 crc kubenswrapper[5010]: I0203 10:07:35.818249 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" event={"ID":"b13d6ce0-d473-4529-89a4-2e7b8ad864b3","Type":"ContainerStarted","Data":"39efd5ea97ac3b2dc44326e763a027b144e99ab980f51894254b44b9a8a1f54d"} Feb 03 10:07:36 crc kubenswrapper[5010]: I0203 10:07:36.823446 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:36 crc kubenswrapper[5010]: I0203 10:07:36.829439 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:36 crc kubenswrapper[5010]: I0203 10:07:36.848431 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" podStartSLOduration=4.848410467 podStartE2EDuration="4.848410467s" podCreationTimestamp="2026-02-03 10:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:35.836729465 +0000 UTC m=+325.992705614" watchObservedRunningTime="2026-02-03 10:07:36.848410467 +0000 UTC m=+327.004386606" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.314830 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h"] Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.315373 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" podUID="b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" containerName="controller-manager" containerID="cri-o://b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e" gracePeriod=30 Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.335004 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw"] Feb 03 10:07:38 crc 
kubenswrapper[5010]: I0203 10:07:38.733008 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.777624 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsqq6\" (UniqueName: \"kubernetes.io/projected/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-kube-api-access-dsqq6\") pod \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.777715 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-proxy-ca-bundles\") pod \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.777740 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-client-ca\") pod \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.777757 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-serving-cert\") pod \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.777816 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-config\") pod \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.778985 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" (UID: "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.779100 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-client-ca" (OuterVolumeSpecName: "client-ca") pod "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" (UID: "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.779353 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-config" (OuterVolumeSpecName: "config") pod "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" (UID: "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.783935 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" (UID: "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.784066 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-kube-api-access-dsqq6" (OuterVolumeSpecName: "kube-api-access-dsqq6") pod "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" (UID: "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea"). InnerVolumeSpecName "kube-api-access-dsqq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.835002 5010 generic.go:334] "Generic (PLEG): container finished" podID="b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" containerID="b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e" exitCode=0 Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.835417 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.835398 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" event={"ID":"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea","Type":"ContainerDied","Data":"b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e"} Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.835785 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" event={"ID":"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea","Type":"ContainerDied","Data":"1c4e6d1216d15486952944a34883d2752e446df691ee61abdfb4affcfd9e809d"} Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.835810 5010 scope.go:117] "RemoveContainer" containerID="b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.852469 5010 scope.go:117] "RemoveContainer" containerID="b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e" Feb 03 10:07:38 crc kubenswrapper[5010]: E0203 10:07:38.853243 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e\": container with ID starting with b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e not found: ID does not exist" containerID="b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.853306 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e"} err="failed to get container status \"b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e\": rpc error: code = NotFound desc = could not find container \"b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e\": container with ID starting with b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e not found: ID does not exist" Feb 03 10:07:38 crc kubenswrapper[5010]: 
I0203 10:07:38.865058 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h"] Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.868934 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h"] Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.878903 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.878938 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsqq6\" (UniqueName: \"kubernetes.io/projected/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-kube-api-access-dsqq6\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.878950 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.878961 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.878972 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.840736 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" podUID="b13d6ce0-d473-4529-89a4-2e7b8ad864b3" containerName="route-controller-manager" containerID="cri-o://681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9" gracePeriod=30 Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.991428 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6"] Feb 03 10:07:39 crc kubenswrapper[5010]: E0203 10:07:39.991687 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" containerName="controller-manager" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.991704 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" containerName="controller-manager" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.991827 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" containerName="controller-manager" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.992264 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.996984 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.997193 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.998073 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.998201 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.998432 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.998684 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.001131 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6"] Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.001681 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.092363 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-config\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.092678 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws4n7\" (UniqueName: \"kubernetes.io/projected/91761982-f6eb-4427-9ca6-274992d3ecc4-kube-api-access-ws4n7\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.092706 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91761982-f6eb-4427-9ca6-274992d3ecc4-serving-cert\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.092725 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-proxy-ca-bundles\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.092755 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-client-ca\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.197095 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-config\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.197138 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws4n7\" (UniqueName: \"kubernetes.io/projected/91761982-f6eb-4427-9ca6-274992d3ecc4-kube-api-access-ws4n7\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.197182 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91761982-f6eb-4427-9ca6-274992d3ecc4-serving-cert\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.197201 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-proxy-ca-bundles\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.197262 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-client-ca\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.198565 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-proxy-ca-bundles\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.198565 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-client-ca\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.202021 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91761982-f6eb-4427-9ca6-274992d3ecc4-serving-cert\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " 
pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.202651 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-config\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.215917 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws4n7\" (UniqueName: \"kubernetes.io/projected/91761982-f6eb-4427-9ca6-274992d3ecc4-kube-api-access-ws4n7\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.251099 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.298759 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-serving-cert\") pod \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.298842 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjv6z\" (UniqueName: \"kubernetes.io/projected/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-kube-api-access-qjv6z\") pod \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.298872 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-client-ca\") pod \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.298926 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-config\") pod \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.299769 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-config" (OuterVolumeSpecName: "config") pod "b13d6ce0-d473-4529-89a4-2e7b8ad864b3" (UID: "b13d6ce0-d473-4529-89a4-2e7b8ad864b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.299865 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "b13d6ce0-d473-4529-89a4-2e7b8ad864b3" (UID: "b13d6ce0-d473-4529-89a4-2e7b8ad864b3"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.302791 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b13d6ce0-d473-4529-89a4-2e7b8ad864b3" (UID: "b13d6ce0-d473-4529-89a4-2e7b8ad864b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.305356 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-kube-api-access-qjv6z" (OuterVolumeSpecName: "kube-api-access-qjv6z") pod "b13d6ce0-d473-4529-89a4-2e7b8ad864b3" (UID: "b13d6ce0-d473-4529-89a4-2e7b8ad864b3"). InnerVolumeSpecName "kube-api-access-qjv6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.330697 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.400780 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.400820 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.400837 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjv6z\" (UniqueName: \"kubernetes.io/projected/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-kube-api-access-qjv6z\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.400855 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.509799 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" path="/var/lib/kubelet/pods/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea/volumes" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.565277 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6"] Feb 03 10:07:40 crc kubenswrapper[5010]: W0203 10:07:40.566189 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91761982_f6eb_4427_9ca6_274992d3ecc4.slice/crio-05f43ef7831519075585445aeedd267d98d6ff0e1d8a989c20d1a24d5d0d35fd WatchSource:0}: Error finding container 05f43ef7831519075585445aeedd267d98d6ff0e1d8a989c20d1a24d5d0d35fd: Status 404 returned error can't find the container with id 05f43ef7831519075585445aeedd267d98d6ff0e1d8a989c20d1a24d5d0d35fd Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.847731 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" event={"ID":"91761982-f6eb-4427-9ca6-274992d3ecc4","Type":"ContainerStarted","Data":"238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6"} Feb 03 10:07:40 crc 
kubenswrapper[5010]: I0203 10:07:40.847960 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.847974 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" event={"ID":"91761982-f6eb-4427-9ca6-274992d3ecc4","Type":"ContainerStarted","Data":"05f43ef7831519075585445aeedd267d98d6ff0e1d8a989c20d1a24d5d0d35fd"} Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.848743 5010 generic.go:334] "Generic (PLEG): container finished" podID="b13d6ce0-d473-4529-89a4-2e7b8ad864b3" containerID="681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9" exitCode=0 Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.848766 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" event={"ID":"b13d6ce0-d473-4529-89a4-2e7b8ad864b3","Type":"ContainerDied","Data":"681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9"} Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.848784 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" event={"ID":"b13d6ce0-d473-4529-89a4-2e7b8ad864b3","Type":"ContainerDied","Data":"39efd5ea97ac3b2dc44326e763a027b144e99ab980f51894254b44b9a8a1f54d"} Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.848798 5010 scope.go:117] "RemoveContainer" containerID="681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.848911 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.859045 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.874966 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" podStartSLOduration=2.874946413 podStartE2EDuration="2.874946413s" podCreationTimestamp="2026-02-03 10:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:40.871206804 +0000 UTC m=+331.027182933" watchObservedRunningTime="2026-02-03 10:07:40.874946413 +0000 UTC m=+331.030922542" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.877932 5010 scope.go:117] "RemoveContainer" containerID="681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9" Feb 03 10:07:40 crc kubenswrapper[5010]: E0203 10:07:40.878288 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9\": container with ID starting with 681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9 not found: ID does not exist" containerID="681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.878330 5010 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9"} err="failed to get container status \"681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9\": rpc error: code = NotFound desc = could not find container \"681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9\": container with ID starting with 681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9 not found: ID does not exist" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.885927 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw"] Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.888462 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw"] Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.509577 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b13d6ce0-d473-4529-89a4-2e7b8ad864b3" path="/var/lib/kubelet/pods/b13d6ce0-d473-4529-89a4-2e7b8ad864b3/volumes" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.989920 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz"] Feb 03 10:07:42 crc kubenswrapper[5010]: E0203 10:07:42.990175 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13d6ce0-d473-4529-89a4-2e7b8ad864b3" containerName="route-controller-manager" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.990196 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13d6ce0-d473-4529-89a4-2e7b8ad864b3" containerName="route-controller-manager" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.990340 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="b13d6ce0-d473-4529-89a4-2e7b8ad864b3" containerName="route-controller-manager" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.990805 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.992560 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.993029 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.993964 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.994255 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.994462 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.997194 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.001460 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz"] Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.033761 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csb89\" (UniqueName: \"kubernetes.io/projected/8628475b-46cd-4b61-8aa2-d36a3fe3af47-kube-api-access-csb89\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.033844 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-config\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.033919 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8628475b-46cd-4b61-8aa2-d36a3fe3af47-serving-cert\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.033960 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-client-ca\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.134502 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csb89\" (UniqueName: \"kubernetes.io/projected/8628475b-46cd-4b61-8aa2-d36a3fe3af47-kube-api-access-csb89\") pod 
\"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.134555 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-config\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.134620 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8628475b-46cd-4b61-8aa2-d36a3fe3af47-serving-cert\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.134654 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-client-ca\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.135559 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-client-ca\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.136671 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-config\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.141779 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8628475b-46cd-4b61-8aa2-d36a3fe3af47-serving-cert\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.158583 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csb89\" (UniqueName: \"kubernetes.io/projected/8628475b-46cd-4b61-8aa2-d36a3fe3af47-kube-api-access-csb89\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.307854 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.698463 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz"] Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.868441 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" event={"ID":"8628475b-46cd-4b61-8aa2-d36a3fe3af47","Type":"ContainerStarted","Data":"94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e"} Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.868827 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" event={"ID":"8628475b-46cd-4b61-8aa2-d36a3fe3af47","Type":"ContainerStarted","Data":"25d16be7d88ebfce6abf5288d6a1be5994b1be679a832ffc963f1662c6ecad64"} Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.869271 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.871005 5010 patch_prober.go:28] interesting pod/route-controller-manager-5dcb9544cc-cd6nz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.871055 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" podUID="8628475b-46cd-4b61-8aa2-d36a3fe3af47" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.911806 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" podStartSLOduration=5.911791624 podStartE2EDuration="5.911791624s" podCreationTimestamp="2026-02-03 10:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:43.909900623 +0000 UTC m=+334.065876752" watchObservedRunningTime="2026-02-03 10:07:43.911791624 +0000 UTC m=+334.067767753" Feb 03 10:07:44 crc kubenswrapper[5010]: I0203 10:07:44.880261 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.402587 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6"] Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.403654 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" podUID="91761982-f6eb-4427-9ca6-274992d3ecc4" containerName="controller-manager" containerID="cri-o://238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6" gracePeriod=30 Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.914373 5010 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.935168 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-proxy-ca-bundles\") pod \"91761982-f6eb-4427-9ca6-274992d3ecc4\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.935261 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws4n7\" (UniqueName: \"kubernetes.io/projected/91761982-f6eb-4427-9ca6-274992d3ecc4-kube-api-access-ws4n7\") pod \"91761982-f6eb-4427-9ca6-274992d3ecc4\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.935288 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-config\") pod \"91761982-f6eb-4427-9ca6-274992d3ecc4\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.935352 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91761982-f6eb-4427-9ca6-274992d3ecc4-serving-cert\") pod \"91761982-f6eb-4427-9ca6-274992d3ecc4\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.935428 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-client-ca\") pod \"91761982-f6eb-4427-9ca6-274992d3ecc4\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.936148 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-client-ca" (OuterVolumeSpecName: "client-ca") pod "91761982-f6eb-4427-9ca6-274992d3ecc4" (UID: "91761982-f6eb-4427-9ca6-274992d3ecc4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.936574 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "91761982-f6eb-4427-9ca6-274992d3ecc4" (UID: "91761982-f6eb-4427-9ca6-274992d3ecc4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.937580 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-config" (OuterVolumeSpecName: "config") pod "91761982-f6eb-4427-9ca6-274992d3ecc4" (UID: "91761982-f6eb-4427-9ca6-274992d3ecc4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.956602 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91761982-f6eb-4427-9ca6-274992d3ecc4-kube-api-access-ws4n7" (OuterVolumeSpecName: "kube-api-access-ws4n7") pod "91761982-f6eb-4427-9ca6-274992d3ecc4" (UID: "91761982-f6eb-4427-9ca6-274992d3ecc4"). 
InnerVolumeSpecName "kube-api-access-ws4n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.961649 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91761982-f6eb-4427-9ca6-274992d3ecc4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "91761982-f6eb-4427-9ca6-274992d3ecc4" (UID: "91761982-f6eb-4427-9ca6-274992d3ecc4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.037065 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.037108 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.037126 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws4n7\" (UniqueName: \"kubernetes.io/projected/91761982-f6eb-4427-9ca6-274992d3ecc4-kube-api-access-ws4n7\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.037139 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.037151 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91761982-f6eb-4427-9ca6-274992d3ecc4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.193068 5010 generic.go:334] "Generic (PLEG): container finished" podID="91761982-f6eb-4427-9ca6-274992d3ecc4" containerID="238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6" exitCode=0 Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.193117 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" event={"ID":"91761982-f6eb-4427-9ca6-274992d3ecc4","Type":"ContainerDied","Data":"238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6"} Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.193129 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.193150 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" event={"ID":"91761982-f6eb-4427-9ca6-274992d3ecc4","Type":"ContainerDied","Data":"05f43ef7831519075585445aeedd267d98d6ff0e1d8a989c20d1a24d5d0d35fd"} Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.193175 5010 scope.go:117] "RemoveContainer" containerID="238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.212369 5010 scope.go:117] "RemoveContainer" containerID="238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6" Feb 03 10:08:07 crc kubenswrapper[5010]: E0203 10:08:07.212967 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6\": container with ID starting with 238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6 not found: ID does not exist" containerID="238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.213115 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6"} err="failed to get container status \"238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6\": rpc error: code = NotFound desc = could not find container \"238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6\": container with ID starting with 238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6 not found: ID does not exist" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.237934 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6"] Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.243855 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6"] Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.002333 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"] Feb 03 10:08:08 crc kubenswrapper[5010]: E0203 10:08:08.002553 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91761982-f6eb-4427-9ca6-274992d3ecc4" containerName="controller-manager" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.002567 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="91761982-f6eb-4427-9ca6-274992d3ecc4" containerName="controller-manager" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.002668 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="91761982-f6eb-4427-9ca6-274992d3ecc4" containerName="controller-manager" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.003030 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.005538 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.005872 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.006058 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.006339 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.006517 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.006770 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.014357 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"] Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.016509 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.048419 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-serving-cert\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.048518 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd4qh\" (UniqueName: \"kubernetes.io/projected/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-kube-api-access-bd4qh\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.048557 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-proxy-ca-bundles\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.048651 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-client-ca\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.048681 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-config\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.149872 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-client-ca\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.150125 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-config\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.150148 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-serving-cert\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.150193 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd4qh\" (UniqueName: \"kubernetes.io/projected/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-kube-api-access-bd4qh\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.150230 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-proxy-ca-bundles\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.151058 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-proxy-ca-bundles\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.151141 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-client-ca\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.152265 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-config\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" 
Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.156321 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-serving-cert\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"
Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.173228 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd4qh\" (UniqueName: \"kubernetes.io/projected/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-kube-api-access-bd4qh\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"
Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.385136 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"
Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.508723 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91761982-f6eb-4427-9ca6-274992d3ecc4" path="/var/lib/kubelet/pods/91761982-f6eb-4427-9ca6-274992d3ecc4/volumes"
Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.768823 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"]
Feb 03 10:08:09 crc kubenswrapper[5010]: I0203 10:08:09.204598 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" event={"ID":"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2","Type":"ContainerStarted","Data":"28248090b01669d75346e2b8e920ede3336868da0bf379c5facee834ccde111b"}
Feb 03 10:08:09 crc kubenswrapper[5010]: I0203 10:08:09.204648 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" event={"ID":"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2","Type":"ContainerStarted","Data":"b970ef5622cdee8c14fcff17c79ffb9eac42f7837974356dad930d3ef4056e23"}
Feb 03 10:08:09 crc kubenswrapper[5010]: I0203 10:08:09.204830 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"
Feb 03 10:08:09 crc kubenswrapper[5010]: I0203 10:08:09.224498 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" podStartSLOduration=3.224476995 podStartE2EDuration="3.224476995s" podCreationTimestamp="2026-02-03 10:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:08:09.222472592 +0000 UTC m=+359.378448721" watchObservedRunningTime="2026-02-03 10:08:09.224476995 +0000 UTC m=+359.380453124"
Feb 03 10:08:09 crc kubenswrapper[5010]: I0203 10:08:09.229298 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"
Feb 03 10:08:16 crc kubenswrapper[5010]: I0203 10:08:16.389903 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:08:16 crc kubenswrapper[5010]: I0203 10:08:16.390679 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
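[note] The two records above report the same liveness probe failure twice, once from the patch logger and once from prober.go: an HTTP GET against 127.0.0.1:8798/health had its connection refused. A kubelet HTTP probe treats a 2xx/3xx response as success and anything else, including a failed dial, as failure; a sketch of that check (illustrative, not the kubelet's prober package):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // Minimal HTTP liveness check like the one logged above; a refused
    // connection surfaces as a probe failure, as in the prober.go:107 record.
    func probe(url string, timeout time.Duration) error {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("probe failed: %w", err) // e.g. "connect: connection refused"
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("probe failed: status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health", time.Second); err != nil {
            fmt.Println(err)
        }
    }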
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.009790 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.010133 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.014376 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8628475b-46cd-4b61-8aa2-d36a3fe3af47-kube-api-access-csb89" (OuterVolumeSpecName: "kube-api-access-csb89") pod "8628475b-46cd-4b61-8aa2-d36a3fe3af47" (UID: "8628475b-46cd-4b61-8aa2-d36a3fe3af47"). InnerVolumeSpecName "kube-api-access-csb89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.015161 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8628475b-46cd-4b61-8aa2-d36a3fe3af47-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8628475b-46cd-4b61-8aa2-d36a3fe3af47" (UID: "8628475b-46cd-4b61-8aa2-d36a3fe3af47"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.111946 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csb89\" (UniqueName: \"kubernetes.io/projected/8628475b-46cd-4b61-8aa2-d36a3fe3af47-kube-api-access-csb89\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.111982 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8628475b-46cd-4b61-8aa2-d36a3fe3af47-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.300928 5010 generic.go:334] "Generic (PLEG): container finished" podID="8628475b-46cd-4b61-8aa2-d36a3fe3af47" containerID="94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e" exitCode=0 Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.300993 5010 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.300928 5010 generic.go:334] "Generic (PLEG): container finished" podID="8628475b-46cd-4b61-8aa2-d36a3fe3af47" containerID="94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e" exitCode=0
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.300993 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz"
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.301012 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" event={"ID":"8628475b-46cd-4b61-8aa2-d36a3fe3af47","Type":"ContainerDied","Data":"94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e"}
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.302023 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" event={"ID":"8628475b-46cd-4b61-8aa2-d36a3fe3af47","Type":"ContainerDied","Data":"25d16be7d88ebfce6abf5288d6a1be5994b1be679a832ffc963f1662c6ecad64"}
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.302102 5010 scope.go:117] "RemoveContainer" containerID="94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e"
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.338094 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz"]
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.344108 5010 scope.go:117] "RemoveContainer" containerID="94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e"
Feb 03 10:08:27 crc kubenswrapper[5010]: E0203 10:08:27.344988 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e\": container with ID starting with 94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e not found: ID does not exist" containerID="94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e"
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.345072 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e"} err="failed to get container status \"94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e\": rpc error: code = NotFound desc = could not find container \"94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e\": container with ID starting with 94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e not found: ID does not exist"
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.349365 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz"]
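[note] The NotFound errors above are benign: a second RemoveContainer fires after the container is already gone, ContainerStatus returns NotFound, and the kubelet logs the error but treats the container as removed. A minimal sketch of that idempotent delete, using a stand-in sentinel instead of the real gRPC NotFound code:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("not found")

    // Stand-in for the runtime's ContainerStatus call; here the container
    // is always already gone, mirroring the logged race.
    func containerStatus(id string) error { return errNotFound }

    func removeContainer(id string) error {
        if err := containerStatus(id); err != nil {
            if errors.Is(err, errNotFound) {
                return nil // already gone: deletion is idempotent, nothing to do
            }
            return fmt.Errorf("failed to get container status %q: %w", id, err)
        }
        // ... would stop and remove the container here ...
        return nil
    }

    func main() {
        fmt.Println(removeContainer("94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e"))
    }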
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.012414 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"]
Feb 03 10:08:28 crc kubenswrapper[5010]: E0203 10:08:28.013857 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8628475b-46cd-4b61-8aa2-d36a3fe3af47" containerName="route-controller-manager"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.013943 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8628475b-46cd-4b61-8aa2-d36a3fe3af47" containerName="route-controller-manager"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.014159 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8628475b-46cd-4b61-8aa2-d36a3fe3af47" containerName="route-controller-manager"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.014940 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.016856 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.017152 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.017326 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.017686 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.019066 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.020096 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.025809 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"]
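[note] The reflector.go:368 records above mark informer caches becoming populated for the ConfigMaps and Secrets the new pod references. This is the standard client-go reflector/informer pattern; a sketch of waiting for such a cache to sync (the namespace is taken from the log, but this is not the kubelet's own wiring):

    package main

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        factory := informers.NewSharedInformerFactoryWithOptions(cs, 10*time.Minute,
            informers.WithNamespace("openshift-route-controller-manager"))
        inf := factory.Core().V1().ConfigMaps().Informer()

        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        factory.Start(ctx.Done())
        if !cache.WaitForCacheSync(ctx.Done(), inf.HasSynced) {
            panic("cache never synced")
        }
        fmt.Println("caches populated") // analogous to the reflector.go:368 records
    }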
\"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9" Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.229086 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58aa9ea0-6947-49a1-80ca-71542cbdd2df-serving-cert\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9" Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.229170 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58aa9ea0-6947-49a1-80ca-71542cbdd2df-client-ca\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9" Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.229255 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58aa9ea0-6947-49a1-80ca-71542cbdd2df-config\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9" Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.230356 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58aa9ea0-6947-49a1-80ca-71542cbdd2df-client-ca\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9" Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.230515 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58aa9ea0-6947-49a1-80ca-71542cbdd2df-config\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9" Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.240503 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58aa9ea0-6947-49a1-80ca-71542cbdd2df-serving-cert\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9" Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.248787 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmlmb\" (UniqueName: \"kubernetes.io/projected/58aa9ea0-6947-49a1-80ca-71542cbdd2df-kube-api-access-gmlmb\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9" Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.366322 5010 util.go:30] "No sandbox for pod can be found. 
Feb 03 10:08:29 crc kubenswrapper[5010]: I0203 10:08:29.313929 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9" event={"ID":"58aa9ea0-6947-49a1-80ca-71542cbdd2df","Type":"ContainerStarted","Data":"90d97ef6e79f118ad5af9abaddf5b989d898abbb9509c202bb53eceef6ac6be3"}
Feb 03 10:08:29 crc kubenswrapper[5010]: I0203 10:08:29.314295 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9" event={"ID":"58aa9ea0-6947-49a1-80ca-71542cbdd2df","Type":"ContainerStarted","Data":"adac1350597f4d37be65c4cdb5d880f9ec298abe66b167d4cb606e3c20877c1c"}
Feb 03 10:08:29 crc kubenswrapper[5010]: I0203 10:08:29.314712 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:29 crc kubenswrapper[5010]: I0203 10:08:29.323838 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:29 crc kubenswrapper[5010]: I0203 10:08:29.342232 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9" podStartSLOduration=3.342199169 podStartE2EDuration="3.342199169s" podCreationTimestamp="2026-02-03 10:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:08:29.339053715 +0000 UTC m=+379.495029864" watchObservedRunningTime="2026-02-03 10:08:29.342199169 +0000 UTC m=+379.498175298"
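[note] The startup-latency record above reports podStartSLOduration and podStartE2EDuration both as 3.342199169s: with the pull timestamps zeroed (no image pull was needed), both reduce to the gap between podCreationTimestamp and the running time the kubelet observed. A quick check of that arithmetic, assuming the logged watchObservedRunningTime is the endpoint:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339Nano, "2026-02-03T10:08:26Z")
        running, _ := time.Parse(time.RFC3339Nano, "2026-02-03T10:08:29.342199169Z")
        // Prints 3.342199169s, matching the logged podStartE2EDuration.
        fmt.Println(running.Sub(created))
    }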
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.173816 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fgqs4"]
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.175177 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.195631 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fgqs4"]
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.268473 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx798\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-kube-api-access-gx798\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.268561 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-registry-tls\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.268718 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/72291d2a-e172-4670-9df7-c4de79cab1a1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.268781 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/72291d2a-e172-4670-9df7-c4de79cab1a1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.268805 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72291d2a-e172-4670-9df7-c4de79cab1a1-trusted-ca\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.268957 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.268999 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/72291d2a-e172-4670-9df7-c4de79cab1a1-registry-certificates\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.269038 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-bound-sa-token\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.290982 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.370594 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/72291d2a-e172-4670-9df7-c4de79cab1a1-registry-certificates\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.370640 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-bound-sa-token\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.370692 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx798\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-kube-api-access-gx798\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.370742 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-registry-tls\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.370782 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/72291d2a-e172-4670-9df7-c4de79cab1a1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.370802 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/72291d2a-e172-4670-9df7-c4de79cab1a1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.370817 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72291d2a-e172-4670-9df7-c4de79cab1a1-trusted-ca\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.372002 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/72291d2a-e172-4670-9df7-c4de79cab1a1-registry-certificates\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.372157 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72291d2a-e172-4670-9df7-c4de79cab1a1-trusted-ca\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.372298 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/72291d2a-e172-4670-9df7-c4de79cab1a1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.376770 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-registry-tls\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.382881 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/72291d2a-e172-4670-9df7-c4de79cab1a1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.387647 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx798\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-kube-api-access-gx798\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.391897 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-bound-sa-token\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.400269 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhsmk"]
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.400500 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rhsmk" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="registry-server" containerID="cri-o://3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f" gracePeriod=30
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f8ldc"] Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.420763 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f8ldc" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="registry-server" containerID="cri-o://6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac" gracePeriod=30 Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.435294 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kg4f"] Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.435551 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" podUID="1b5592be-8839-4660-a4c4-ab662fc975eb" containerName="marketplace-operator" containerID="cri-o://a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b" gracePeriod=30 Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.452687 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w967c"] Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.453277 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w967c" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="registry-server" containerID="cri-o://d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08" gracePeriod=30 Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.454367 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lskbc"] Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.455469 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.462182 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5pgxf"] Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.462413 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5pgxf" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="registry-server" containerID="cri-o://64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b" gracePeriod=30 Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.466024 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lskbc"] Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.492197 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.572919 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.572985 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.573275 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9j8h\" (UniqueName: \"kubernetes.io/projected/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-kube-api-access-q9j8h\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.674728 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9j8h\" (UniqueName: \"kubernetes.io/projected/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-kube-api-access-q9j8h\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.674814 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.674838 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.680197 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.681629 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.697004 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9j8h\" (UniqueName: \"kubernetes.io/projected/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-kube-api-access-q9j8h\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.762667 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: E0203 10:08:31.835822 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b is running failed: container process not found" containerID="64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b" cmd=["grpc_health_probe","-addr=:50051"] Feb 03 10:08:31 crc kubenswrapper[5010]: E0203 10:08:31.837776 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b is running failed: container process not found" containerID="64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b" cmd=["grpc_health_probe","-addr=:50051"] Feb 03 10:08:31 crc kubenswrapper[5010]: E0203 10:08:31.838805 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b is running failed: container process not found" containerID="64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b" cmd=["grpc_health_probe","-addr=:50051"] Feb 03 10:08:31 crc kubenswrapper[5010]: E0203 10:08:31.838839 5010 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-5pgxf" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="registry-server" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.982015 5010 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.982015 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhsmk"
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.081011 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-catalog-content\") pod \"6b321403-09c3-4199-98ce-474deeea9d18\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") "
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.081050 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rkwl\" (UniqueName: \"kubernetes.io/projected/6b321403-09c3-4199-98ce-474deeea9d18-kube-api-access-8rkwl\") pod \"6b321403-09c3-4199-98ce-474deeea9d18\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") "
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.081173 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-utilities\") pod \"6b321403-09c3-4199-98ce-474deeea9d18\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") "
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.082276 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-utilities" (OuterVolumeSpecName: "utilities") pod "6b321403-09c3-4199-98ce-474deeea9d18" (UID: "6b321403-09c3-4199-98ce-474deeea9d18"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.099157 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b321403-09c3-4199-98ce-474deeea9d18-kube-api-access-8rkwl" (OuterVolumeSpecName: "kube-api-access-8rkwl") pod "6b321403-09c3-4199-98ce-474deeea9d18" (UID: "6b321403-09c3-4199-98ce-474deeea9d18"). InnerVolumeSpecName "kube-api-access-8rkwl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.150401 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b321403-09c3-4199-98ce-474deeea9d18" (UID: "6b321403-09c3-4199-98ce-474deeea9d18"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.182862 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.182913 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.182932 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rkwl\" (UniqueName: \"kubernetes.io/projected/6b321403-09c3-4199-98ce-474deeea9d18-kube-api-access-8rkwl\") on node \"crc\" DevicePath \"\""
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.199894 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w967c"
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.206239 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f8ldc"
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.212437 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f"
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.255193 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pgxf"
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.257095 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fgqs4"]
Feb 03 10:08:32 crc kubenswrapper[5010]: W0203 10:08:32.269434 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72291d2a_e172_4670_9df7_c4de79cab1a1.slice/crio-bdcfcd819707a008d216ee28c8a59fdebeca7cc15a6cf4579f372782cccc49dd WatchSource:0}: Error finding container bdcfcd819707a008d216ee28c8a59fdebeca7cc15a6cf4579f372782cccc49dd: Status 404 returned error can't find the container with id bdcfcd819707a008d216ee28c8a59fdebeca7cc15a6cf4579f372782cccc49dd
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287311 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics\") pod \"1b5592be-8839-4660-a4c4-ab662fc975eb\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") "
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287625 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-utilities\") pod \"5a09b802-00fe-4ff8-983e-58c495061478\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") "
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287678 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmnts\" (UniqueName: \"kubernetes.io/projected/1b5592be-8839-4660-a4c4-ab662fc975eb-kube-api-access-pmnts\") pod \"1b5592be-8839-4660-a4c4-ab662fc975eb\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") "
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287702 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-catalog-content\") pod \"5a09b802-00fe-4ff8-983e-58c495061478\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") "
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287719 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-utilities\") pod \"778b346c-f503-4364-9757-98c213d89edc\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") "
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287744 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw58w\" (UniqueName: \"kubernetes.io/projected/778b346c-f503-4364-9757-98c213d89edc-kube-api-access-mw58w\") pod \"778b346c-f503-4364-9757-98c213d89edc\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") "
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287759 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjvqs\" (UniqueName: \"kubernetes.io/projected/5a09b802-00fe-4ff8-983e-58c495061478-kube-api-access-vjvqs\") pod \"5a09b802-00fe-4ff8-983e-58c495061478\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") "
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287788 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-catalog-content\") pod \"778b346c-f503-4364-9757-98c213d89edc\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") "
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287842 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca\") pod \"1b5592be-8839-4660-a4c4-ab662fc975eb\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") "
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.289149 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "1b5592be-8839-4660-a4c4-ab662fc975eb" (UID: "1b5592be-8839-4660-a4c4-ab662fc975eb"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.289176 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-utilities" (OuterVolumeSpecName: "utilities") pod "5a09b802-00fe-4ff8-983e-58c495061478" (UID: "5a09b802-00fe-4ff8-983e-58c495061478"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.289414 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-utilities" (OuterVolumeSpecName: "utilities") pod "778b346c-f503-4364-9757-98c213d89edc" (UID: "778b346c-f503-4364-9757-98c213d89edc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.291558 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b5592be-8839-4660-a4c4-ab662fc975eb-kube-api-access-pmnts" (OuterVolumeSpecName: "kube-api-access-pmnts") pod "1b5592be-8839-4660-a4c4-ab662fc975eb" (UID: "1b5592be-8839-4660-a4c4-ab662fc975eb"). InnerVolumeSpecName "kube-api-access-pmnts". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.291577 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "1b5592be-8839-4660-a4c4-ab662fc975eb" (UID: "1b5592be-8839-4660-a4c4-ab662fc975eb"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.296226 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a09b802-00fe-4ff8-983e-58c495061478-kube-api-access-vjvqs" (OuterVolumeSpecName: "kube-api-access-vjvqs") pod "5a09b802-00fe-4ff8-983e-58c495061478" (UID: "5a09b802-00fe-4ff8-983e-58c495061478"). InnerVolumeSpecName "kube-api-access-vjvqs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.297141 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/778b346c-f503-4364-9757-98c213d89edc-kube-api-access-mw58w" (OuterVolumeSpecName: "kube-api-access-mw58w") pod "778b346c-f503-4364-9757-98c213d89edc" (UID: "778b346c-f503-4364-9757-98c213d89edc"). InnerVolumeSpecName "kube-api-access-mw58w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.329845 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "778b346c-f503-4364-9757-98c213d89edc" (UID: "778b346c-f503-4364-9757-98c213d89edc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.341008 5010 generic.go:334] "Generic (PLEG): container finished" podID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerID="64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b" exitCode=0
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.341076 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pgxf" event={"ID":"777b0b1e-96c3-4914-8b7b-d51186433cb7","Type":"ContainerDied","Data":"64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b"}
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.341104 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pgxf" event={"ID":"777b0b1e-96c3-4914-8b7b-d51186433cb7","Type":"ContainerDied","Data":"3ee4a0547eec3952db79e960939ddf437d022a2d426d7a0f64071f60145150ba"}
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.341119 5010 scope.go:117] "RemoveContainer" containerID="64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b"
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.341239 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pgxf"
Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.347081 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a09b802-00fe-4ff8-983e-58c495061478" (UID: "5a09b802-00fe-4ff8-983e-58c495061478"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.351045 5010 generic.go:334] "Generic (PLEG): container finished" podID="778b346c-f503-4364-9757-98c213d89edc" containerID="d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08" exitCode=0 Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.351093 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w967c" event={"ID":"778b346c-f503-4364-9757-98c213d89edc","Type":"ContainerDied","Data":"d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.351119 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w967c" event={"ID":"778b346c-f503-4364-9757-98c213d89edc","Type":"ContainerDied","Data":"ccc904854d56565749138df195a8c2b29f6946a5393227b9fe1b124f630fe4e6"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.351185 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.355279 5010 generic.go:334] "Generic (PLEG): container finished" podID="5a09b802-00fe-4ff8-983e-58c495061478" containerID="6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac" exitCode=0 Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.355348 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ldc" event={"ID":"5a09b802-00fe-4ff8-983e-58c495061478","Type":"ContainerDied","Data":"6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.355370 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ldc" event={"ID":"5a09b802-00fe-4ff8-983e-58c495061478","Type":"ContainerDied","Data":"9b3e23c6c17315ac65a0626a6f5dc6fcfc45753c23f65c38f8420f31fc344706"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.355428 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.357782 5010 generic.go:334] "Generic (PLEG): container finished" podID="6b321403-09c3-4199-98ce-474deeea9d18" containerID="3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f" exitCode=0 Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.358146 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.358527 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhsmk" event={"ID":"6b321403-09c3-4199-98ce-474deeea9d18","Type":"ContainerDied","Data":"3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.358637 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhsmk" event={"ID":"6b321403-09c3-4199-98ce-474deeea9d18","Type":"ContainerDied","Data":"63d8474bfb4a1a954341a0c6e3ac0ed4a51edc38981d0b3fd911b0c631516f52"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.359642 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4" event={"ID":"72291d2a-e172-4670-9df7-c4de79cab1a1","Type":"ContainerStarted","Data":"bdcfcd819707a008d216ee28c8a59fdebeca7cc15a6cf4579f372782cccc49dd"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.360878 5010 generic.go:334] "Generic (PLEG): container finished" podID="1b5592be-8839-4660-a4c4-ab662fc975eb" containerID="a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b" exitCode=0 Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.360926 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" event={"ID":"1b5592be-8839-4660-a4c4-ab662fc975eb","Type":"ContainerDied","Data":"a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.360945 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.361000 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" event={"ID":"1b5592be-8839-4660-a4c4-ab662fc975eb","Type":"ContainerDied","Data":"2ade3cdf2529ce4152b52a6e4a45299bf6c1e2325f1341f2c73a3d85ad1e71e8"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.376165 5010 scope.go:117] "RemoveContainer" containerID="8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.378939 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w967c"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.382250 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w967c"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.388934 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-utilities\") pod \"777b0b1e-96c3-4914-8b7b-d51186433cb7\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389002 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-catalog-content\") pod \"777b0b1e-96c3-4914-8b7b-d51186433cb7\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389036 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ndvzg\" (UniqueName: \"kubernetes.io/projected/777b0b1e-96c3-4914-8b7b-d51186433cb7-kube-api-access-ndvzg\") pod \"777b0b1e-96c3-4914-8b7b-d51186433cb7\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389435 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmnts\" (UniqueName: \"kubernetes.io/projected/1b5592be-8839-4660-a4c4-ab662fc975eb-kube-api-access-pmnts\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389457 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389470 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389480 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw58w\" (UniqueName: \"kubernetes.io/projected/778b346c-f503-4364-9757-98c213d89edc-kube-api-access-mw58w\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389491 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjvqs\" (UniqueName: \"kubernetes.io/projected/5a09b802-00fe-4ff8-983e-58c495061478-kube-api-access-vjvqs\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389505 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389514 5010 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389522 5010 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389534 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389799 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-utilities" (OuterVolumeSpecName: "utilities") pod "777b0b1e-96c3-4914-8b7b-d51186433cb7" (UID: "777b0b1e-96c3-4914-8b7b-d51186433cb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.392525 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/777b0b1e-96c3-4914-8b7b-d51186433cb7-kube-api-access-ndvzg" (OuterVolumeSpecName: "kube-api-access-ndvzg") pod "777b0b1e-96c3-4914-8b7b-d51186433cb7" (UID: "777b0b1e-96c3-4914-8b7b-d51186433cb7"). InnerVolumeSpecName "kube-api-access-ndvzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.411358 5010 scope.go:117] "RemoveContainer" containerID="fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.423818 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lskbc"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.450428 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhsmk"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.452294 5010 scope.go:117] "RemoveContainer" containerID="64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.458235 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rhsmk"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.458958 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kg4f"] Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.459751 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b\": container with ID starting with 64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b not found: ID does not exist" containerID="64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.459803 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b"} err="failed to get container status \"64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b\": rpc error: code = NotFound desc = could not find container \"64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b\": container with ID starting with 64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.459833 5010 scope.go:117] "RemoveContainer" containerID="8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.460302 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e\": container with ID starting with 8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e not found: ID does not exist" containerID="8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.460332 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e"} err="failed to get container status \"8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e\": rpc error: code = NotFound desc = could not find container \"8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e\": container with ID starting with 8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.460352 5010 scope.go:117] "RemoveContainer" 
containerID="fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.460962 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1\": container with ID starting with fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1 not found: ID does not exist" containerID="fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.461005 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1"} err="failed to get container status \"fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1\": rpc error: code = NotFound desc = could not find container \"fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1\": container with ID starting with fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1 not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.461033 5010 scope.go:117] "RemoveContainer" containerID="d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.461710 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kg4f"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.477104 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f8ldc"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.482813 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f8ldc"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.485766 5010 scope.go:117] "RemoveContainer" containerID="699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.494647 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndvzg\" (UniqueName: \"kubernetes.io/projected/777b0b1e-96c3-4914-8b7b-d51186433cb7-kube-api-access-ndvzg\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.494673 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.509269 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b5592be-8839-4660-a4c4-ab662fc975eb" path="/var/lib/kubelet/pods/1b5592be-8839-4660-a4c4-ab662fc975eb/volumes" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.510398 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a09b802-00fe-4ff8-983e-58c495061478" path="/var/lib/kubelet/pods/5a09b802-00fe-4ff8-983e-58c495061478/volumes" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.511274 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b321403-09c3-4199-98ce-474deeea9d18" path="/var/lib/kubelet/pods/6b321403-09c3-4199-98ce-474deeea9d18/volumes" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.512356 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="778b346c-f503-4364-9757-98c213d89edc" 
path="/var/lib/kubelet/pods/778b346c-f503-4364-9757-98c213d89edc/volumes" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.522498 5010 scope.go:117] "RemoveContainer" containerID="c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.539183 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "777b0b1e-96c3-4914-8b7b-d51186433cb7" (UID: "777b0b1e-96c3-4914-8b7b-d51186433cb7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.545394 5010 scope.go:117] "RemoveContainer" containerID="d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.545789 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08\": container with ID starting with d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08 not found: ID does not exist" containerID="d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.545826 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08"} err="failed to get container status \"d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08\": rpc error: code = NotFound desc = could not find container \"d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08\": container with ID starting with d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08 not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.545855 5010 scope.go:117] "RemoveContainer" containerID="699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.546339 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd\": container with ID starting with 699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd not found: ID does not exist" containerID="699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.546398 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd"} err="failed to get container status \"699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd\": rpc error: code = NotFound desc = could not find container \"699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd\": container with ID starting with 699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.546424 5010 scope.go:117] "RemoveContainer" containerID="c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.546759 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39\": container with ID starting with c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39 not found: ID does not exist" containerID="c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.546784 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39"} err="failed to get container status \"c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39\": rpc error: code = NotFound desc = could not find container \"c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39\": container with ID starting with c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39 not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.546822 5010 scope.go:117] "RemoveContainer" containerID="6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.563733 5010 scope.go:117] "RemoveContainer" containerID="f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.577105 5010 scope.go:117] "RemoveContainer" containerID="fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.595402 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.596337 5010 scope.go:117] "RemoveContainer" containerID="6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.596816 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac\": container with ID starting with 6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac not found: ID does not exist" containerID="6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.596846 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac"} err="failed to get container status \"6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac\": rpc error: code = NotFound desc = could not find container \"6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac\": container with ID starting with 6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.596888 5010 scope.go:117] "RemoveContainer" containerID="f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.597263 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50\": container with ID starting with f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50 not found: ID does not exist" 
containerID="f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.597298 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50"} err="failed to get container status \"f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50\": rpc error: code = NotFound desc = could not find container \"f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50\": container with ID starting with f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50 not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.597318 5010 scope.go:117] "RemoveContainer" containerID="fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.597634 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908\": container with ID starting with fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908 not found: ID does not exist" containerID="fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.597663 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908"} err="failed to get container status \"fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908\": rpc error: code = NotFound desc = could not find container \"fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908\": container with ID starting with fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908 not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.597757 5010 scope.go:117] "RemoveContainer" containerID="3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.620670 5010 scope.go:117] "RemoveContainer" containerID="ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.634781 5010 scope.go:117] "RemoveContainer" containerID="bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.647381 5010 scope.go:117] "RemoveContainer" containerID="3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.647776 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f\": container with ID starting with 3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f not found: ID does not exist" containerID="3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.647812 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f"} err="failed to get container status \"3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f\": rpc error: code = NotFound desc = could not find container 
\"3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f\": container with ID starting with 3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.647835 5010 scope.go:117] "RemoveContainer" containerID="ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.648061 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e\": container with ID starting with ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e not found: ID does not exist" containerID="ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.648093 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e"} err="failed to get container status \"ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e\": rpc error: code = NotFound desc = could not find container \"ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e\": container with ID starting with ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.648116 5010 scope.go:117] "RemoveContainer" containerID="bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.648369 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47\": container with ID starting with bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47 not found: ID does not exist" containerID="bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.648415 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47"} err="failed to get container status \"bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47\": rpc error: code = NotFound desc = could not find container \"bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47\": container with ID starting with bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47 not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.648449 5010 scope.go:117] "RemoveContainer" containerID="a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.669534 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5pgxf"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.669706 5010 scope.go:117] "RemoveContainer" containerID="a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.671083 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b\": container with ID starting with a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b 
not found: ID does not exist" containerID="a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.671114 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b"} err="failed to get container status \"a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b\": rpc error: code = NotFound desc = could not find container \"a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b\": container with ID starting with a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.674686 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5pgxf"] Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.367558 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4" event={"ID":"72291d2a-e172-4670-9df7-c4de79cab1a1","Type":"ContainerStarted","Data":"d2a25ce869bce00299f0a36e2bb34ce27b46d433c773c7af24e6c88b7046ec27"} Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.368793 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.370366 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" event={"ID":"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe","Type":"ContainerStarted","Data":"0c5d00a618b4fe3bf12bea8272155363e2ac87eb3b57761a6bc995e47e6d7e8e"} Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.370413 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" event={"ID":"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe","Type":"ContainerStarted","Data":"8c053b62d9c03e959bb50f47c15edab6c6f4fc5f6b6bd852c66e0416a6f03de1"} Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.370606 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.374418 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.398765 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4" podStartSLOduration=2.398745445 podStartE2EDuration="2.398745445s" podCreationTimestamp="2026-02-03 10:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:08:33.393584198 +0000 UTC m=+383.549560337" watchObservedRunningTime="2026-02-03 10:08:33.398745445 +0000 UTC m=+383.554721574" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.423367 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" podStartSLOduration=2.423352002 podStartE2EDuration="2.423352002s" podCreationTimestamp="2026-02-03 10:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:08:33.420094505 +0000 UTC 
m=+383.576070634" watchObservedRunningTime="2026-02-03 10:08:33.423352002 +0000 UTC m=+383.579328131" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.614938 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-96wzf"] Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615179 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615195 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615207 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615234 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615245 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b5592be-8839-4660-a4c4-ab662fc975eb" containerName="marketplace-operator" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615254 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b5592be-8839-4660-a4c4-ab662fc975eb" containerName="marketplace-operator" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615266 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615275 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615285 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615293 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615306 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615315 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615327 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615335 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615347 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615355 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615366 5010 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615374 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615387 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615409 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615423 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615431 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615455 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615462 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615473 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615481 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615603 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b5592be-8839-4660-a4c4-ab662fc975eb" containerName="marketplace-operator" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615622 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615637 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615647 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615655 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.616489 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.621858 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.627193 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-96wzf"] Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.709553 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a04fc61-013a-4515-92ca-e620b3d376d5-utilities\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.709626 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a04fc61-013a-4515-92ca-e620b3d376d5-catalog-content\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.709869 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdrmx\" (UniqueName: \"kubernetes.io/projected/0a04fc61-013a-4515-92ca-e620b3d376d5-kube-api-access-jdrmx\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.811365 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdrmx\" (UniqueName: \"kubernetes.io/projected/0a04fc61-013a-4515-92ca-e620b3d376d5-kube-api-access-jdrmx\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.811459 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a04fc61-013a-4515-92ca-e620b3d376d5-utilities\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.811477 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a04fc61-013a-4515-92ca-e620b3d376d5-catalog-content\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.811895 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a04fc61-013a-4515-92ca-e620b3d376d5-catalog-content\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.811957 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a04fc61-013a-4515-92ca-e620b3d376d5-utilities\") pod \"redhat-marketplace-96wzf\" (UID: 
\"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.815605 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gz7lx"] Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.817247 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.820432 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.825187 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gz7lx"] Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.837412 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdrmx\" (UniqueName: \"kubernetes.io/projected/0a04fc61-013a-4515-92ca-e620b3d376d5-kube-api-access-jdrmx\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.912335 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwmm5\" (UniqueName: \"kubernetes.io/projected/1b4caad6-6b6c-452e-9be8-97e7115dbd72-kube-api-access-qwmm5\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.912396 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4caad6-6b6c-452e-9be8-97e7115dbd72-utilities\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.912429 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4caad6-6b6c-452e-9be8-97e7115dbd72-catalog-content\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.941804 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.013615 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwmm5\" (UniqueName: \"kubernetes.io/projected/1b4caad6-6b6c-452e-9be8-97e7115dbd72-kube-api-access-qwmm5\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.013690 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4caad6-6b6c-452e-9be8-97e7115dbd72-utilities\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.013757 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4caad6-6b6c-452e-9be8-97e7115dbd72-catalog-content\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.014420 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4caad6-6b6c-452e-9be8-97e7115dbd72-catalog-content\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.016564 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4caad6-6b6c-452e-9be8-97e7115dbd72-utilities\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.037321 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwmm5\" (UniqueName: \"kubernetes.io/projected/1b4caad6-6b6c-452e-9be8-97e7115dbd72-kube-api-access-qwmm5\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.136752 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.422142 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-96wzf"] Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.508962 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" path="/var/lib/kubelet/pods/777b0b1e-96c3-4914-8b7b-d51186433cb7/volumes" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.530020 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gz7lx"] Feb 03 10:08:34 crc kubenswrapper[5010]: W0203 10:08:34.535905 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b4caad6_6b6c_452e_9be8_97e7115dbd72.slice/crio-6507aa35e6193590d1824596d95b8a0a21eb5c3a7b78806fc58e58b064b04809 WatchSource:0}: Error finding container 6507aa35e6193590d1824596d95b8a0a21eb5c3a7b78806fc58e58b064b04809: Status 404 returned error can't find the container with id 6507aa35e6193590d1824596d95b8a0a21eb5c3a7b78806fc58e58b064b04809 Feb 03 10:08:35 crc kubenswrapper[5010]: I0203 10:08:35.387906 5010 generic.go:334] "Generic (PLEG): container finished" podID="1b4caad6-6b6c-452e-9be8-97e7115dbd72" containerID="649d5d5889619b3db5484b734f48a0f661f1b37c23ecc0ba2567cbcf312dac49" exitCode=0 Feb 03 10:08:35 crc kubenswrapper[5010]: I0203 10:08:35.388267 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz7lx" event={"ID":"1b4caad6-6b6c-452e-9be8-97e7115dbd72","Type":"ContainerDied","Data":"649d5d5889619b3db5484b734f48a0f661f1b37c23ecc0ba2567cbcf312dac49"} Feb 03 10:08:35 crc kubenswrapper[5010]: I0203 10:08:35.388704 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz7lx" event={"ID":"1b4caad6-6b6c-452e-9be8-97e7115dbd72","Type":"ContainerStarted","Data":"6507aa35e6193590d1824596d95b8a0a21eb5c3a7b78806fc58e58b064b04809"} Feb 03 10:08:35 crc kubenswrapper[5010]: I0203 10:08:35.390895 5010 generic.go:334] "Generic (PLEG): container finished" podID="0a04fc61-013a-4515-92ca-e620b3d376d5" containerID="4a9e4cdd3bd69602ab7a8af75d7d073fc432b07568f14b8b4f8329cc3a161d22" exitCode=0 Feb 03 10:08:35 crc kubenswrapper[5010]: I0203 10:08:35.391031 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96wzf" event={"ID":"0a04fc61-013a-4515-92ca-e620b3d376d5","Type":"ContainerDied","Data":"4a9e4cdd3bd69602ab7a8af75d7d073fc432b07568f14b8b4f8329cc3a161d22"} Feb 03 10:08:35 crc kubenswrapper[5010]: I0203 10:08:35.391057 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96wzf" event={"ID":"0a04fc61-013a-4515-92ca-e620b3d376d5","Type":"ContainerStarted","Data":"fead6303a7ed8b14298a3b3d0e23569f8415a1b5b1c37a523c55ffa0829f0f01"} Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.022535 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7dtrz"] Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.023676 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.025483 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.028918 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dtrz"] Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.041986 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41f0db19-3c04-4062-94da-f2058d7ef64a-catalog-content\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.042035 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5s9r\" (UniqueName: \"kubernetes.io/projected/41f0db19-3c04-4062-94da-f2058d7ef64a-kube-api-access-z5s9r\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.042252 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41f0db19-3c04-4062-94da-f2058d7ef64a-utilities\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.143562 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41f0db19-3c04-4062-94da-f2058d7ef64a-catalog-content\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.143600 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5s9r\" (UniqueName: \"kubernetes.io/projected/41f0db19-3c04-4062-94da-f2058d7ef64a-kube-api-access-z5s9r\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.143655 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41f0db19-3c04-4062-94da-f2058d7ef64a-utilities\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.144165 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41f0db19-3c04-4062-94da-f2058d7ef64a-catalog-content\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.144248 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41f0db19-3c04-4062-94da-f2058d7ef64a-utilities\") pod \"community-operators-7dtrz\" (UID: 
\"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.162487 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5s9r\" (UniqueName: \"kubernetes.io/projected/41f0db19-3c04-4062-94da-f2058d7ef64a-kube-api-access-z5s9r\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.216118 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xwfjv"] Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.217368 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.223744 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.226265 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xwfjv"] Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.249787 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499eebdd-1202-4427-bf19-7ff14c5f8507-utilities\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.249901 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499eebdd-1202-4427-bf19-7ff14c5f8507-catalog-content\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.249967 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzhfz\" (UniqueName: \"kubernetes.io/projected/499eebdd-1202-4427-bf19-7ff14c5f8507-kube-api-access-tzhfz\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.341086 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.351857 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499eebdd-1202-4427-bf19-7ff14c5f8507-utilities\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.352151 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499eebdd-1202-4427-bf19-7ff14c5f8507-catalog-content\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.352306 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzhfz\" (UniqueName: \"kubernetes.io/projected/499eebdd-1202-4427-bf19-7ff14c5f8507-kube-api-access-tzhfz\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.352589 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499eebdd-1202-4427-bf19-7ff14c5f8507-utilities\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.352666 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499eebdd-1202-4427-bf19-7ff14c5f8507-catalog-content\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.370284 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzhfz\" (UniqueName: \"kubernetes.io/projected/499eebdd-1202-4427-bf19-7ff14c5f8507-kube-api-access-tzhfz\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.568126 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.752924 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dtrz"] Feb 03 10:08:36 crc kubenswrapper[5010]: W0203 10:08:36.755876 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41f0db19_3c04_4062_94da_f2058d7ef64a.slice/crio-f33131e79e384fe2afe7360729e82c466d2dd7daf96c2ed6415e011ae52ad36a WatchSource:0}: Error finding container f33131e79e384fe2afe7360729e82c466d2dd7daf96c2ed6415e011ae52ad36a: Status 404 returned error can't find the container with id f33131e79e384fe2afe7360729e82c466d2dd7daf96c2ed6415e011ae52ad36a Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.966527 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xwfjv"] Feb 03 10:08:36 crc kubenswrapper[5010]: W0203 10:08:36.975429 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod499eebdd_1202_4427_bf19_7ff14c5f8507.slice/crio-9930daa4cb3b17269e2f4ce3847ee42981d1f4d57104af430b72251a6b0c459e WatchSource:0}: Error finding container 9930daa4cb3b17269e2f4ce3847ee42981d1f4d57104af430b72251a6b0c459e: Status 404 returned error can't find the container with id 9930daa4cb3b17269e2f4ce3847ee42981d1f4d57104af430b72251a6b0c459e Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.403181 5010 generic.go:334] "Generic (PLEG): container finished" podID="41f0db19-3c04-4062-94da-f2058d7ef64a" containerID="b063329c753357a7ed3b9d6bec1638bc687c9277b9fe6b16859d4133fd1fc6a0" exitCode=0 Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.403634 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dtrz" event={"ID":"41f0db19-3c04-4062-94da-f2058d7ef64a","Type":"ContainerDied","Data":"b063329c753357a7ed3b9d6bec1638bc687c9277b9fe6b16859d4133fd1fc6a0"} Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.403668 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dtrz" event={"ID":"41f0db19-3c04-4062-94da-f2058d7ef64a","Type":"ContainerStarted","Data":"f33131e79e384fe2afe7360729e82c466d2dd7daf96c2ed6415e011ae52ad36a"} Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.409167 5010 generic.go:334] "Generic (PLEG): container finished" podID="1b4caad6-6b6c-452e-9be8-97e7115dbd72" containerID="f947c5d43a1cd178ea6882c8a748cf0e0703d0960f92472c74bb48b670787162" exitCode=0 Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.409234 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz7lx" event={"ID":"1b4caad6-6b6c-452e-9be8-97e7115dbd72","Type":"ContainerDied","Data":"f947c5d43a1cd178ea6882c8a748cf0e0703d0960f92472c74bb48b670787162"} Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.411614 5010 generic.go:334] "Generic (PLEG): container finished" podID="0a04fc61-013a-4515-92ca-e620b3d376d5" containerID="66e007c709fe7f7d9122d566e528247b7a5744b4d9c113cda7640fdb7f2392b8" exitCode=0 Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.411677 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96wzf" event={"ID":"0a04fc61-013a-4515-92ca-e620b3d376d5","Type":"ContainerDied","Data":"66e007c709fe7f7d9122d566e528247b7a5744b4d9c113cda7640fdb7f2392b8"} 
Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.414408 5010 generic.go:334] "Generic (PLEG): container finished" podID="499eebdd-1202-4427-bf19-7ff14c5f8507" containerID="266977f1c8826bf4506937bae4a2203a1b45ad313184b03a6022c3e9a2e18bec" exitCode=0 Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.414450 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwfjv" event={"ID":"499eebdd-1202-4427-bf19-7ff14c5f8507","Type":"ContainerDied","Data":"266977f1c8826bf4506937bae4a2203a1b45ad313184b03a6022c3e9a2e18bec"} Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.414473 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwfjv" event={"ID":"499eebdd-1202-4427-bf19-7ff14c5f8507","Type":"ContainerStarted","Data":"9930daa4cb3b17269e2f4ce3847ee42981d1f4d57104af430b72251a6b0c459e"} Feb 03 10:08:38 crc kubenswrapper[5010]: I0203 10:08:38.422094 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz7lx" event={"ID":"1b4caad6-6b6c-452e-9be8-97e7115dbd72","Type":"ContainerStarted","Data":"1bcfe5244cc922aa84a6a40e4680d517665f0a49f6f2b53318e7bc167e38eb2c"} Feb 03 10:08:38 crc kubenswrapper[5010]: I0203 10:08:38.424072 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96wzf" event={"ID":"0a04fc61-013a-4515-92ca-e620b3d376d5","Type":"ContainerStarted","Data":"d68bd3a14f1325b87821010ebd48ce066009ad4fb502b7564ded43783c7668c5"} Feb 03 10:08:38 crc kubenswrapper[5010]: I0203 10:08:38.426043 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwfjv" event={"ID":"499eebdd-1202-4427-bf19-7ff14c5f8507","Type":"ContainerStarted","Data":"c034b90c14f164a1c9b318b6bbc9cdbc987ea84f86b5e9e8ddfd80264db9be8a"} Feb 03 10:08:38 crc kubenswrapper[5010]: I0203 10:08:38.427791 5010 generic.go:334] "Generic (PLEG): container finished" podID="41f0db19-3c04-4062-94da-f2058d7ef64a" containerID="c4b36012a304b17c9b9fadc9e622391ff6944a242cfef1aba9de2a55aeb56508" exitCode=0 Feb 03 10:08:38 crc kubenswrapper[5010]: I0203 10:08:38.427873 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dtrz" event={"ID":"41f0db19-3c04-4062-94da-f2058d7ef64a","Type":"ContainerDied","Data":"c4b36012a304b17c9b9fadc9e622391ff6944a242cfef1aba9de2a55aeb56508"} Feb 03 10:08:38 crc kubenswrapper[5010]: I0203 10:08:38.441005 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gz7lx" podStartSLOduration=2.92142193 podStartE2EDuration="5.440981411s" podCreationTimestamp="2026-02-03 10:08:33 +0000 UTC" firstStartedPulling="2026-02-03 10:08:35.389669803 +0000 UTC m=+385.545645932" lastFinishedPulling="2026-02-03 10:08:37.909229284 +0000 UTC m=+388.065205413" observedRunningTime="2026-02-03 10:08:38.439177263 +0000 UTC m=+388.595153392" watchObservedRunningTime="2026-02-03 10:08:38.440981411 +0000 UTC m=+388.596957540" Feb 03 10:08:38 crc kubenswrapper[5010]: I0203 10:08:38.497978 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-96wzf" podStartSLOduration=3.071213866 podStartE2EDuration="5.497960591s" podCreationTimestamp="2026-02-03 10:08:33 +0000 UTC" firstStartedPulling="2026-02-03 10:08:35.39218786 +0000 UTC m=+385.548163989" lastFinishedPulling="2026-02-03 10:08:37.818934585 +0000 UTC m=+387.974910714" 
observedRunningTime="2026-02-03 10:08:38.497811447 +0000 UTC m=+388.653787576" watchObservedRunningTime="2026-02-03 10:08:38.497960591 +0000 UTC m=+388.653936710" Feb 03 10:08:39 crc kubenswrapper[5010]: I0203 10:08:39.434826 5010 generic.go:334] "Generic (PLEG): container finished" podID="499eebdd-1202-4427-bf19-7ff14c5f8507" containerID="c034b90c14f164a1c9b318b6bbc9cdbc987ea84f86b5e9e8ddfd80264db9be8a" exitCode=0 Feb 03 10:08:39 crc kubenswrapper[5010]: I0203 10:08:39.434891 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwfjv" event={"ID":"499eebdd-1202-4427-bf19-7ff14c5f8507","Type":"ContainerDied","Data":"c034b90c14f164a1c9b318b6bbc9cdbc987ea84f86b5e9e8ddfd80264db9be8a"} Feb 03 10:08:39 crc kubenswrapper[5010]: I0203 10:08:39.438046 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dtrz" event={"ID":"41f0db19-3c04-4062-94da-f2058d7ef64a","Type":"ContainerStarted","Data":"74d0cf58551154d549c0dbe2e4f90b363b89d18105a1678c5ba367f1463377c5"} Feb 03 10:08:39 crc kubenswrapper[5010]: I0203 10:08:39.475535 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7dtrz" podStartSLOduration=2.024677904 podStartE2EDuration="3.475517322s" podCreationTimestamp="2026-02-03 10:08:36 +0000 UTC" firstStartedPulling="2026-02-03 10:08:37.405632548 +0000 UTC m=+387.561608677" lastFinishedPulling="2026-02-03 10:08:38.856471966 +0000 UTC m=+389.012448095" observedRunningTime="2026-02-03 10:08:39.473459427 +0000 UTC m=+389.629435566" watchObservedRunningTime="2026-02-03 10:08:39.475517322 +0000 UTC m=+389.631493461" Feb 03 10:08:40 crc kubenswrapper[5010]: I0203 10:08:40.446109 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwfjv" event={"ID":"499eebdd-1202-4427-bf19-7ff14c5f8507","Type":"ContainerStarted","Data":"7a0f898c466476b945015975c9dbd85cf2a00daec2e0f2e319af85c44444d2b7"} Feb 03 10:08:40 crc kubenswrapper[5010]: I0203 10:08:40.481107 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xwfjv" podStartSLOduration=1.957640925 podStartE2EDuration="4.48109092s" podCreationTimestamp="2026-02-03 10:08:36 +0000 UTC" firstStartedPulling="2026-02-03 10:08:37.415705227 +0000 UTC m=+387.571681356" lastFinishedPulling="2026-02-03 10:08:39.939155222 +0000 UTC m=+390.095131351" observedRunningTime="2026-02-03 10:08:40.476064906 +0000 UTC m=+390.632041035" watchObservedRunningTime="2026-02-03 10:08:40.48109092 +0000 UTC m=+390.637067049" Feb 03 10:08:43 crc kubenswrapper[5010]: I0203 10:08:43.943100 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:43 crc kubenswrapper[5010]: I0203 10:08:43.943532 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:43 crc kubenswrapper[5010]: I0203 10:08:43.988277 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:44 crc kubenswrapper[5010]: I0203 10:08:44.138314 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:44 crc kubenswrapper[5010]: I0203 10:08:44.138353 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:44 crc kubenswrapper[5010]: I0203 10:08:44.181656 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:44 crc kubenswrapper[5010]: I0203 10:08:44.499078 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:44 crc kubenswrapper[5010]: I0203 10:08:44.499539 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.342190 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.342591 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.389306 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.389852 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.389990 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.513741 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.569068 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.570088 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.611056 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:47 crc kubenswrapper[5010]: I0203 10:08:47.520373 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:52 crc kubenswrapper[5010]: I0203 10:08:52.554768 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4" Feb 03 10:08:52 crc kubenswrapper[5010]: I0203 10:08:52.612318 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x857s"] Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.390280 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.390765 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.390805 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.391389 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f50e55cc732f578ead4018fcd8ab51937afcd54061bf1c5885e82d08d42bd4d4"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.391442 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://f50e55cc732f578ead4018fcd8ab51937afcd54061bf1c5885e82d08d42bd4d4" gracePeriod=600 Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.668689 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="f50e55cc732f578ead4018fcd8ab51937afcd54061bf1c5885e82d08d42bd4d4" exitCode=0 Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.669041 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"f50e55cc732f578ead4018fcd8ab51937afcd54061bf1c5885e82d08d42bd4d4"} Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.669081 5010 scope.go:117] "RemoveContainer" containerID="48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb" Feb 03 10:09:17 crc kubenswrapper[5010]: I0203 10:09:17.676627 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"7590c7f71cb1479ef753f84e11bac9c523014434d96f673572f6202b5d5157c6"} Feb 03 10:09:17 crc kubenswrapper[5010]: I0203 10:09:17.689070 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" podUID="594e9304-c63f-4d73-bcad-5258c1ebdd6d" containerName="registry" containerID="cri-o://4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27" gracePeriod=30 Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.081454 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.131820 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-certificates\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.131891 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-bound-sa-token\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.131922 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/594e9304-c63f-4d73-bcad-5258c1ebdd6d-ca-trust-extracted\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.131939 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/594e9304-c63f-4d73-bcad-5258c1ebdd6d-installation-pull-secrets\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.132107 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.132142 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-tls\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.132162 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-trusted-ca\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.132233 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf8k7\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-kube-api-access-mf8k7\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.133001 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.133334 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.137625 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.137847 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/594e9304-c63f-4d73-bcad-5258c1ebdd6d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.138355 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-kube-api-access-mf8k7" (OuterVolumeSpecName: "kube-api-access-mf8k7") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "kube-api-access-mf8k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.141271 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.146361 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.150373 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/594e9304-c63f-4d73-bcad-5258c1ebdd6d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.233605 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf8k7\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-kube-api-access-mf8k7\") on node \"crc\" DevicePath \"\"" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.233651 5010 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.233661 5010 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.233670 5010 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/594e9304-c63f-4d73-bcad-5258c1ebdd6d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.233681 5010 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/594e9304-c63f-4d73-bcad-5258c1ebdd6d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.233689 5010 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.233697 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.684067 5010 generic.go:334] "Generic (PLEG): container finished" podID="594e9304-c63f-4d73-bcad-5258c1ebdd6d" containerID="4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27" exitCode=0 Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.684553 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.684882 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" event={"ID":"594e9304-c63f-4d73-bcad-5258c1ebdd6d","Type":"ContainerDied","Data":"4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27"} Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.684921 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" event={"ID":"594e9304-c63f-4d73-bcad-5258c1ebdd6d","Type":"ContainerDied","Data":"4d0c21608e47f2a5fbe71a063022d5430ee94df368929ef6f0cd30bef83d5cd9"} Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.684940 5010 scope.go:117] "RemoveContainer" containerID="4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.707813 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x857s"] Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.709926 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x857s"] Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.714339 5010 scope.go:117] "RemoveContainer" containerID="4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27" Feb 03 10:09:18 crc kubenswrapper[5010]: E0203 10:09:18.715025 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27\": container with ID starting with 4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27 not found: ID does not exist" containerID="4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.715068 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27"} err="failed to get container status \"4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27\": rpc error: code = NotFound desc = could not find container \"4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27\": container with ID starting with 4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27 not found: ID does not exist" Feb 03 10:09:20 crc kubenswrapper[5010]: I0203 10:09:20.513066 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="594e9304-c63f-4d73-bcad-5258c1ebdd6d" path="/var/lib/kubelet/pods/594e9304-c63f-4d73-bcad-5258c1ebdd6d/volumes" Feb 03 10:11:10 crc kubenswrapper[5010]: I0203 10:11:10.717156 5010 scope.go:117] "RemoveContainer" containerID="9193e654b0aae87a0f6cb66b87865bff8d5a0d8845927c6e2ff446174e9141b4" Feb 03 10:11:16 crc kubenswrapper[5010]: I0203 10:11:16.389963 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:11:16 crc kubenswrapper[5010]: I0203 10:11:16.390500 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:11:46 crc kubenswrapper[5010]: I0203 10:11:46.390098 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:11:46 crc kubenswrapper[5010]: I0203 10:11:46.390824 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.390422 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.391059 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.391147 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.391985 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7590c7f71cb1479ef753f84e11bac9c523014434d96f673572f6202b5d5157c6"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.392068 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://7590c7f71cb1479ef753f84e11bac9c523014434d96f673572f6202b5d5157c6" gracePeriod=600 Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.673128 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="7590c7f71cb1479ef753f84e11bac9c523014434d96f673572f6202b5d5157c6" exitCode=0 Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.673235 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"7590c7f71cb1479ef753f84e11bac9c523014434d96f673572f6202b5d5157c6"} Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.673574 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"8680190c062bea3a65ab9dd9a4d956ebc68c414b2e8a2f0c41a9c5b1c0cfad9d"} Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.673603 5010 scope.go:117] "RemoveContainer" containerID="f50e55cc732f578ead4018fcd8ab51937afcd54061bf1c5885e82d08d42bd4d4" Feb 03 10:13:10 crc kubenswrapper[5010]: I0203 10:13:10.772992 5010 scope.go:117] "RemoveContainer" containerID="aafef9981fa7d11562eb0bd58e7300535437ad38c9714ffedb6d952272ad69e5" Feb 03 10:14:16 crc kubenswrapper[5010]: I0203 10:14:16.389632 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:14:16 crc kubenswrapper[5010]: I0203 10:14:16.390190 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:14:46 crc kubenswrapper[5010]: I0203 10:14:46.390303 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:14:46 crc kubenswrapper[5010]: I0203 10:14:46.390804 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:14:54 crc kubenswrapper[5010]: I0203 10:14:54.411968 5010 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.176261 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz"] Feb 03 10:15:00 crc kubenswrapper[5010]: E0203 10:15:00.176747 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594e9304-c63f-4d73-bcad-5258c1ebdd6d" containerName="registry" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.176759 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="594e9304-c63f-4d73-bcad-5258c1ebdd6d" containerName="registry" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.176897 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="594e9304-c63f-4d73-bcad-5258c1ebdd6d" containerName="registry" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.177431 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.180009 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.181771 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.183435 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz"] Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.328253 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eae17d2-2362-4e78-908b-42fcb386ec60-config-volume\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.328326 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p24nv\" (UniqueName: \"kubernetes.io/projected/0eae17d2-2362-4e78-908b-42fcb386ec60-kube-api-access-p24nv\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.328442 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eae17d2-2362-4e78-908b-42fcb386ec60-secret-volume\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.429262 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p24nv\" (UniqueName: \"kubernetes.io/projected/0eae17d2-2362-4e78-908b-42fcb386ec60-kube-api-access-p24nv\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.429354 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eae17d2-2362-4e78-908b-42fcb386ec60-secret-volume\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.429377 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eae17d2-2362-4e78-908b-42fcb386ec60-config-volume\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.430243 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eae17d2-2362-4e78-908b-42fcb386ec60-config-volume\") pod 
\"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.439881 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eae17d2-2362-4e78-908b-42fcb386ec60-secret-volume\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.449872 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p24nv\" (UniqueName: \"kubernetes.io/projected/0eae17d2-2362-4e78-908b-42fcb386ec60-kube-api-access-p24nv\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.495025 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.692539 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz"] Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.784849 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" event={"ID":"0eae17d2-2362-4e78-908b-42fcb386ec60","Type":"ContainerStarted","Data":"8d324f3579b6cb0c90918bb39a82d082a4c75658003709ed685fa0043f912d2e"} Feb 03 10:15:01 crc kubenswrapper[5010]: I0203 10:15:01.793544 5010 generic.go:334] "Generic (PLEG): container finished" podID="0eae17d2-2362-4e78-908b-42fcb386ec60" containerID="73db75a439822b6dd55d522e4da89fbd20aa66ab67d412f72f9dfe07016f6245" exitCode=0 Feb 03 10:15:01 crc kubenswrapper[5010]: I0203 10:15:01.793612 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" event={"ID":"0eae17d2-2362-4e78-908b-42fcb386ec60","Type":"ContainerDied","Data":"73db75a439822b6dd55d522e4da89fbd20aa66ab67d412f72f9dfe07016f6245"} Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.026540 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.180550 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eae17d2-2362-4e78-908b-42fcb386ec60-secret-volume\") pod \"0eae17d2-2362-4e78-908b-42fcb386ec60\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.180752 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p24nv\" (UniqueName: \"kubernetes.io/projected/0eae17d2-2362-4e78-908b-42fcb386ec60-kube-api-access-p24nv\") pod \"0eae17d2-2362-4e78-908b-42fcb386ec60\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.180889 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eae17d2-2362-4e78-908b-42fcb386ec60-config-volume\") pod \"0eae17d2-2362-4e78-908b-42fcb386ec60\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.182543 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eae17d2-2362-4e78-908b-42fcb386ec60-config-volume" (OuterVolumeSpecName: "config-volume") pod "0eae17d2-2362-4e78-908b-42fcb386ec60" (UID: "0eae17d2-2362-4e78-908b-42fcb386ec60"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.188596 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eae17d2-2362-4e78-908b-42fcb386ec60-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0eae17d2-2362-4e78-908b-42fcb386ec60" (UID: "0eae17d2-2362-4e78-908b-42fcb386ec60"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.188964 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eae17d2-2362-4e78-908b-42fcb386ec60-kube-api-access-p24nv" (OuterVolumeSpecName: "kube-api-access-p24nv") pod "0eae17d2-2362-4e78-908b-42fcb386ec60" (UID: "0eae17d2-2362-4e78-908b-42fcb386ec60"). InnerVolumeSpecName "kube-api-access-p24nv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.283629 5010 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eae17d2-2362-4e78-908b-42fcb386ec60-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.283690 5010 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eae17d2-2362-4e78-908b-42fcb386ec60-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.283702 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p24nv\" (UniqueName: \"kubernetes.io/projected/0eae17d2-2362-4e78-908b-42fcb386ec60-kube-api-access-p24nv\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.806977 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" event={"ID":"0eae17d2-2362-4e78-908b-42fcb386ec60","Type":"ContainerDied","Data":"8d324f3579b6cb0c90918bb39a82d082a4c75658003709ed685fa0043f912d2e"} Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.807030 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d324f3579b6cb0c90918bb39a82d082a4c75658003709ed685fa0043f912d2e" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.807124 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.555256 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-krrwt"] Feb 03 10:15:11 crc kubenswrapper[5010]: E0203 10:15:11.556837 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eae17d2-2362-4e78-908b-42fcb386ec60" containerName="collect-profiles" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.556917 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eae17d2-2362-4e78-908b-42fcb386ec60" containerName="collect-profiles" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.557064 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eae17d2-2362-4e78-908b-42fcb386ec60" containerName="collect-profiles" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.557904 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.566740 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-krrwt"] Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.683828 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-catalog-content\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.684248 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbtwv\" (UniqueName: \"kubernetes.io/projected/237c1de5-296b-44bc-91d7-c22e7c476939-kube-api-access-sbtwv\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.684353 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-utilities\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.786175 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbtwv\" (UniqueName: \"kubernetes.io/projected/237c1de5-296b-44bc-91d7-c22e7c476939-kube-api-access-sbtwv\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.786508 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-utilities\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.786659 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-catalog-content\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.787204 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-utilities\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.787398 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-catalog-content\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.807449 5010 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sbtwv\" (UniqueName: \"kubernetes.io/projected/237c1de5-296b-44bc-91d7-c22e7c476939-kube-api-access-sbtwv\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.874230 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:12 crc kubenswrapper[5010]: I0203 10:15:12.115617 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-krrwt"] Feb 03 10:15:12 crc kubenswrapper[5010]: I0203 10:15:12.849432 5010 generic.go:334] "Generic (PLEG): container finished" podID="237c1de5-296b-44bc-91d7-c22e7c476939" containerID="d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c" exitCode=0 Feb 03 10:15:12 crc kubenswrapper[5010]: I0203 10:15:12.849484 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krrwt" event={"ID":"237c1de5-296b-44bc-91d7-c22e7c476939","Type":"ContainerDied","Data":"d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c"} Feb 03 10:15:12 crc kubenswrapper[5010]: I0203 10:15:12.849733 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krrwt" event={"ID":"237c1de5-296b-44bc-91d7-c22e7c476939","Type":"ContainerStarted","Data":"2b643800be8a8e15452559b1220b173cfe0c49c5dc8916864c4c014b46512dcd"} Feb 03 10:15:12 crc kubenswrapper[5010]: I0203 10:15:12.851090 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 10:15:13 crc kubenswrapper[5010]: I0203 10:15:13.860503 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krrwt" event={"ID":"237c1de5-296b-44bc-91d7-c22e7c476939","Type":"ContainerStarted","Data":"76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7"} Feb 03 10:15:14 crc kubenswrapper[5010]: I0203 10:15:14.867409 5010 generic.go:334] "Generic (PLEG): container finished" podID="237c1de5-296b-44bc-91d7-c22e7c476939" containerID="76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7" exitCode=0 Feb 03 10:15:14 crc kubenswrapper[5010]: I0203 10:15:14.867732 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krrwt" event={"ID":"237c1de5-296b-44bc-91d7-c22e7c476939","Type":"ContainerDied","Data":"76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7"} Feb 03 10:15:15 crc kubenswrapper[5010]: I0203 10:15:15.875038 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krrwt" event={"ID":"237c1de5-296b-44bc-91d7-c22e7c476939","Type":"ContainerStarted","Data":"b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1"} Feb 03 10:15:15 crc kubenswrapper[5010]: I0203 10:15:15.896950 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-krrwt" podStartSLOduration=2.490613839 podStartE2EDuration="4.896930696s" podCreationTimestamp="2026-02-03 10:15:11 +0000 UTC" firstStartedPulling="2026-02-03 10:15:12.850665747 +0000 UTC m=+783.006641876" lastFinishedPulling="2026-02-03 10:15:15.256982604 +0000 UTC m=+785.412958733" observedRunningTime="2026-02-03 10:15:15.894699171 +0000 UTC m=+786.050675300" watchObservedRunningTime="2026-02-03 
10:15:15.896930696 +0000 UTC m=+786.052906825" Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.390621 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.391195 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.391286 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.392203 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8680190c062bea3a65ab9dd9a4d956ebc68c414b2e8a2f0c41a9c5b1c0cfad9d"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.392354 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://8680190c062bea3a65ab9dd9a4d956ebc68c414b2e8a2f0c41a9c5b1c0cfad9d" gracePeriod=600 Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.885093 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="8680190c062bea3a65ab9dd9a4d956ebc68c414b2e8a2f0c41a9c5b1c0cfad9d" exitCode=0 Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.885195 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"8680190c062bea3a65ab9dd9a4d956ebc68c414b2e8a2f0c41a9c5b1c0cfad9d"} Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.885309 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"9442102e724f69e1d556f61f5773f0e8e33b6a283cb3f40b3f679d223bc6c1e0"} Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.885335 5010 scope.go:117] "RemoveContainer" containerID="7590c7f71cb1479ef753f84e11bac9c523014434d96f673572f6202b5d5157c6" Feb 03 10:15:18 crc kubenswrapper[5010]: I0203 10:15:18.940715 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h5jw9"] Feb 03 10:15:18 crc kubenswrapper[5010]: I0203 10:15:18.944051 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:18 crc kubenswrapper[5010]: I0203 10:15:18.955903 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5jw9"] Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.090601 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76lr9\" (UniqueName: \"kubernetes.io/projected/2d5ec45e-19ce-4629-a3e8-66e3053a1649-kube-api-access-76lr9\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.091003 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-utilities\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.091035 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-catalog-content\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.193117 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76lr9\" (UniqueName: \"kubernetes.io/projected/2d5ec45e-19ce-4629-a3e8-66e3053a1649-kube-api-access-76lr9\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.193334 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-utilities\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.193408 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-catalog-content\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.194040 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-utilities\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.194116 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-catalog-content\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.217004 5010 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-76lr9\" (UniqueName: \"kubernetes.io/projected/2d5ec45e-19ce-4629-a3e8-66e3053a1649-kube-api-access-76lr9\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.267559 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.496691 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5jw9"] Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.911904 5010 generic.go:334] "Generic (PLEG): container finished" podID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerID="f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386" exitCode=0 Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.911983 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5jw9" event={"ID":"2d5ec45e-19ce-4629-a3e8-66e3053a1649","Type":"ContainerDied","Data":"f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386"} Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.912012 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5jw9" event={"ID":"2d5ec45e-19ce-4629-a3e8-66e3053a1649","Type":"ContainerStarted","Data":"886a6e84902e3d168c9afbd1fdc0db0df45cb54090864e42049678385ba60527"} Feb 03 10:15:21 crc kubenswrapper[5010]: I0203 10:15:21.874370 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:21 crc kubenswrapper[5010]: I0203 10:15:21.874460 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:21 crc kubenswrapper[5010]: I0203 10:15:21.922421 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:21 crc kubenswrapper[5010]: I0203 10:15:21.968509 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:22 crc kubenswrapper[5010]: I0203 10:15:22.933820 5010 generic.go:334] "Generic (PLEG): container finished" podID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerID="f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559" exitCode=0 Feb 03 10:15:22 crc kubenswrapper[5010]: I0203 10:15:22.933940 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5jw9" event={"ID":"2d5ec45e-19ce-4629-a3e8-66e3053a1649","Type":"ContainerDied","Data":"f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559"} Feb 03 10:15:23 crc kubenswrapper[5010]: I0203 10:15:23.127364 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-krrwt"] Feb 03 10:15:23 crc kubenswrapper[5010]: I0203 10:15:23.941580 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5jw9" event={"ID":"2d5ec45e-19ce-4629-a3e8-66e3053a1649","Type":"ContainerStarted","Data":"1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63"} Feb 03 10:15:23 crc kubenswrapper[5010]: I0203 10:15:23.941670 5010 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-krrwt" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="registry-server" containerID="cri-o://b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1" gracePeriod=2 Feb 03 10:15:23 crc kubenswrapper[5010]: I0203 10:15:23.964667 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h5jw9" podStartSLOduration=2.488907111 podStartE2EDuration="5.964646735s" podCreationTimestamp="2026-02-03 10:15:18 +0000 UTC" firstStartedPulling="2026-02-03 10:15:19.913403379 +0000 UTC m=+790.069379508" lastFinishedPulling="2026-02-03 10:15:23.389143003 +0000 UTC m=+793.545119132" observedRunningTime="2026-02-03 10:15:23.964340407 +0000 UTC m=+794.120316546" watchObservedRunningTime="2026-02-03 10:15:23.964646735 +0000 UTC m=+794.120622864" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.267802 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.459245 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbtwv\" (UniqueName: \"kubernetes.io/projected/237c1de5-296b-44bc-91d7-c22e7c476939-kube-api-access-sbtwv\") pod \"237c1de5-296b-44bc-91d7-c22e7c476939\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.459304 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-catalog-content\") pod \"237c1de5-296b-44bc-91d7-c22e7c476939\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.459346 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-utilities\") pod \"237c1de5-296b-44bc-91d7-c22e7c476939\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.460319 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-utilities" (OuterVolumeSpecName: "utilities") pod "237c1de5-296b-44bc-91d7-c22e7c476939" (UID: "237c1de5-296b-44bc-91d7-c22e7c476939"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.465649 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237c1de5-296b-44bc-91d7-c22e7c476939-kube-api-access-sbtwv" (OuterVolumeSpecName: "kube-api-access-sbtwv") pod "237c1de5-296b-44bc-91d7-c22e7c476939" (UID: "237c1de5-296b-44bc-91d7-c22e7c476939"). InnerVolumeSpecName "kube-api-access-sbtwv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.560260 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbtwv\" (UniqueName: \"kubernetes.io/projected/237c1de5-296b-44bc-91d7-c22e7c476939-kube-api-access-sbtwv\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.560293 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.574456 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "237c1de5-296b-44bc-91d7-c22e7c476939" (UID: "237c1de5-296b-44bc-91d7-c22e7c476939"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.662564 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.950091 5010 generic.go:334] "Generic (PLEG): container finished" podID="237c1de5-296b-44bc-91d7-c22e7c476939" containerID="b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1" exitCode=0 Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.950128 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.950149 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krrwt" event={"ID":"237c1de5-296b-44bc-91d7-c22e7c476939","Type":"ContainerDied","Data":"b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1"} Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.950248 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krrwt" event={"ID":"237c1de5-296b-44bc-91d7-c22e7c476939","Type":"ContainerDied","Data":"2b643800be8a8e15452559b1220b173cfe0c49c5dc8916864c4c014b46512dcd"} Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.950281 5010 scope.go:117] "RemoveContainer" containerID="b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.967542 5010 scope.go:117] "RemoveContainer" containerID="76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.988614 5010 scope.go:117] "RemoveContainer" containerID="d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.988763 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-krrwt"] Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.990269 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-krrwt"] Feb 03 10:15:25 crc kubenswrapper[5010]: I0203 10:15:25.020565 5010 scope.go:117] "RemoveContainer" containerID="b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1" Feb 03 10:15:25 crc kubenswrapper[5010]: E0203 10:15:25.021118 5010 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1\": container with ID starting with b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1 not found: ID does not exist" containerID="b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1" Feb 03 10:15:25 crc kubenswrapper[5010]: I0203 10:15:25.021238 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1"} err="failed to get container status \"b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1\": rpc error: code = NotFound desc = could not find container \"b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1\": container with ID starting with b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1 not found: ID does not exist" Feb 03 10:15:25 crc kubenswrapper[5010]: I0203 10:15:25.021333 5010 scope.go:117] "RemoveContainer" containerID="76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7" Feb 03 10:15:25 crc kubenswrapper[5010]: E0203 10:15:25.022692 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7\": container with ID starting with 76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7 not found: ID does not exist" containerID="76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7" Feb 03 10:15:25 crc kubenswrapper[5010]: I0203 10:15:25.022719 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7"} err="failed to get container status \"76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7\": rpc error: code = NotFound desc = could not find container \"76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7\": container with ID starting with 76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7 not found: ID does not exist" Feb 03 10:15:25 crc kubenswrapper[5010]: I0203 10:15:25.022736 5010 scope.go:117] "RemoveContainer" containerID="d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c" Feb 03 10:15:25 crc kubenswrapper[5010]: E0203 10:15:25.023152 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c\": container with ID starting with d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c not found: ID does not exist" containerID="d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c" Feb 03 10:15:25 crc kubenswrapper[5010]: I0203 10:15:25.023201 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c"} err="failed to get container status \"d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c\": rpc error: code = NotFound desc = could not find container \"d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c\": container with ID starting with d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c not found: ID does not exist" Feb 03 10:15:26 crc kubenswrapper[5010]: I0203 10:15:26.508895 5010 kubelet_volumes.go:163] "Cleaned 
Feb 03 10:15:29 crc kubenswrapper[5010]: I0203 10:15:29.268458 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h5jw9"
Feb 03 10:15:29 crc kubenswrapper[5010]: I0203 10:15:29.268817 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h5jw9"
Feb 03 10:15:29 crc kubenswrapper[5010]: I0203 10:15:29.308300 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h5jw9"
Feb 03 10:15:30 crc kubenswrapper[5010]: I0203 10:15:30.015899 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h5jw9"
Feb 03 10:15:30 crc kubenswrapper[5010]: I0203 10:15:30.056306 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5jw9"]
Feb 03 10:15:31 crc kubenswrapper[5010]: I0203 10:15:31.986693 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h5jw9" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="registry-server" containerID="cri-o://1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63" gracePeriod=2
Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.372787 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h5jw9"
Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.560728 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-catalog-content\") pod \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") "
Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.561374 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76lr9\" (UniqueName: \"kubernetes.io/projected/2d5ec45e-19ce-4629-a3e8-66e3053a1649-kube-api-access-76lr9\") pod \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") "
Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.561494 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-utilities\") pod \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") "
Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.562618 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-utilities" (OuterVolumeSpecName: "utilities") pod "2d5ec45e-19ce-4629-a3e8-66e3053a1649" (UID: "2d5ec45e-19ce-4629-a3e8-66e3053a1649"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.569755 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5ec45e-19ce-4629-a3e8-66e3053a1649-kube-api-access-76lr9" (OuterVolumeSpecName: "kube-api-access-76lr9") pod "2d5ec45e-19ce-4629-a3e8-66e3053a1649" (UID: "2d5ec45e-19ce-4629-a3e8-66e3053a1649"). InnerVolumeSpecName "kube-api-access-76lr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.589503 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d5ec45e-19ce-4629-a3e8-66e3053a1649" (UID: "2d5ec45e-19ce-4629-a3e8-66e3053a1649"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.663196 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76lr9\" (UniqueName: \"kubernetes.io/projected/2d5ec45e-19ce-4629-a3e8-66e3053a1649-kube-api-access-76lr9\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.663361 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.663392 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.993611 5010 generic.go:334] "Generic (PLEG): container finished" podID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerID="1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63" exitCode=0 Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.993672 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.993676 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5jw9" event={"ID":"2d5ec45e-19ce-4629-a3e8-66e3053a1649","Type":"ContainerDied","Data":"1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63"} Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.993845 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5jw9" event={"ID":"2d5ec45e-19ce-4629-a3e8-66e3053a1649","Type":"ContainerDied","Data":"886a6e84902e3d168c9afbd1fdc0db0df45cb54090864e42049678385ba60527"} Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.993881 5010 scope.go:117] "RemoveContainer" containerID="1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.011348 5010 scope.go:117] "RemoveContainer" containerID="f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.037663 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5jw9"] Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.039192 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5jw9"] Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.044851 5010 scope.go:117] "RemoveContainer" containerID="f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.067695 5010 scope.go:117] "RemoveContainer" containerID="1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63" Feb 03 10:15:33 crc kubenswrapper[5010]: E0203 10:15:33.068299 5010 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63\": container with ID starting with 1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63 not found: ID does not exist" containerID="1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.068345 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63"} err="failed to get container status \"1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63\": rpc error: code = NotFound desc = could not find container \"1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63\": container with ID starting with 1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63 not found: ID does not exist" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.068366 5010 scope.go:117] "RemoveContainer" containerID="f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559" Feb 03 10:15:33 crc kubenswrapper[5010]: E0203 10:15:33.068715 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559\": container with ID starting with f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559 not found: ID does not exist" containerID="f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.068748 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559"} err="failed to get container status \"f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559\": rpc error: code = NotFound desc = could not find container \"f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559\": container with ID starting with f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559 not found: ID does not exist" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.068764 5010 scope.go:117] "RemoveContainer" containerID="f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386" Feb 03 10:15:33 crc kubenswrapper[5010]: E0203 10:15:33.069064 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386\": container with ID starting with f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386 not found: ID does not exist" containerID="f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.069090 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386"} err="failed to get container status \"f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386\": rpc error: code = NotFound desc = could not find container \"f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386\": container with ID starting with f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386 not found: ID does not exist" Feb 03 10:15:34 crc kubenswrapper[5010]: I0203 10:15:34.510041 5010 kubelet_volumes.go:163] "Cleaned 
Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.972567 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd"]
Feb 03 10:16:47 crc kubenswrapper[5010]: E0203 10:16:47.973471 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="extract-content"
Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973490 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="extract-content"
Feb 03 10:16:47 crc kubenswrapper[5010]: E0203 10:16:47.973504 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="registry-server"
Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973511 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="registry-server"
Feb 03 10:16:47 crc kubenswrapper[5010]: E0203 10:16:47.973521 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="extract-utilities"
Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973528 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="extract-utilities"
Feb 03 10:16:47 crc kubenswrapper[5010]: E0203 10:16:47.973542 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="extract-content"
Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973550 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="extract-content"
Feb 03 10:16:47 crc kubenswrapper[5010]: E0203 10:16:47.973561 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="registry-server"
Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973568 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="registry-server"
Feb 03 10:16:47 crc kubenswrapper[5010]: E0203 10:16:47.973577 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="extract-utilities"
Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973587 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="extract-utilities"
Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973701 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="registry-server"
Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973718 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="registry-server"
Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.974243 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd"
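The burst of cpu_manager/state_mem/memory_manager lines above is housekeeping triggered by the first pod admission after the marketplace pods were deleted: the resource managers sweep their checkpointed per-container assignments and drop entries whose pod UID is no longer active. The E-level lines are part of that sweep's logging, not failures. A generic sketch of the pattern, with invented names and an illustrative assignment type:

    package main

    import "fmt"

    // removeStaleState drops per-container assignments whose pod is no longer
    // active, mirroring the sweep logged by cpu_manager.go's RemoveStaleState.
    func removeStaleState(assignments map[string]map[string]bool, activePods map[string]bool) {
        for podUID, containers := range assignments {
            if activePods[podUID] {
                continue
            }
            for containerName := range containers {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
                    podUID, containerName)
            }
            delete(assignments, podUID) // drop the whole pod entry from the checkpoint
        }
    }

    func main() {
        assignments := map[string]map[string]bool{
            "2d5ec45e-19ce-4629-a3e8-66e3053a1649": {"extract-content": true, "registry-server": true},
        }
        removeStaleState(assignments, map[string]bool{}) // no active pods: everything is stale
    }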
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.975994 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.976200 5010 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-jztx5" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.976907 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.978521 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-wtwpn"] Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.979155 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-wtwpn" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.981303 5010 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-bmdtr" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.989145 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd"] Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.998130 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-wtwpn"] Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.002264 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bfc2c"] Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.003765 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.006165 5010 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-2vtp2" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.016839 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bfc2c"] Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.031548 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvvcm\" (UniqueName: \"kubernetes.io/projected/26bf0193-c1b8-4018-a7e4-4429a4292dfb-kube-api-access-zvvcm\") pod \"cert-manager-webhook-687f57d79b-bfc2c\" (UID: \"26bf0193-c1b8-4018-a7e4-4429a4292dfb\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.031605 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcgtb\" (UniqueName: \"kubernetes.io/projected/7746ae6f-d9a0-4bba-a7bc-4920ed478ff4-kube-api-access-lcgtb\") pod \"cert-manager-858654f9db-wtwpn\" (UID: \"7746ae6f-d9a0-4bba-a7bc-4920ed478ff4\") " pod="cert-manager/cert-manager-858654f9db-wtwpn" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.031774 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcw44\" (UniqueName: \"kubernetes.io/projected/b9d02d93-3df5-4e4a-99b3-07329087dc2c-kube-api-access-wcw44\") pod \"cert-manager-cainjector-cf98fcc89-b5ngd\" (UID: \"b9d02d93-3df5-4e4a-99b3-07329087dc2c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd" Feb 03 10:16:48 crc 
Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.133486 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvvcm\" (UniqueName: \"kubernetes.io/projected/26bf0193-c1b8-4018-a7e4-4429a4292dfb-kube-api-access-zvvcm\") pod \"cert-manager-webhook-687f57d79b-bfc2c\" (UID: \"26bf0193-c1b8-4018-a7e4-4429a4292dfb\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c"
Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.133558 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcgtb\" (UniqueName: \"kubernetes.io/projected/7746ae6f-d9a0-4bba-a7bc-4920ed478ff4-kube-api-access-lcgtb\") pod \"cert-manager-858654f9db-wtwpn\" (UID: \"7746ae6f-d9a0-4bba-a7bc-4920ed478ff4\") " pod="cert-manager/cert-manager-858654f9db-wtwpn"
Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.133624 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcw44\" (UniqueName: \"kubernetes.io/projected/b9d02d93-3df5-4e4a-99b3-07329087dc2c-kube-api-access-wcw44\") pod \"cert-manager-cainjector-cf98fcc89-b5ngd\" (UID: \"b9d02d93-3df5-4e4a-99b3-07329087dc2c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd"
Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.154844 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcgtb\" (UniqueName: \"kubernetes.io/projected/7746ae6f-d9a0-4bba-a7bc-4920ed478ff4-kube-api-access-lcgtb\") pod \"cert-manager-858654f9db-wtwpn\" (UID: \"7746ae6f-d9a0-4bba-a7bc-4920ed478ff4\") " pod="cert-manager/cert-manager-858654f9db-wtwpn"
Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.154977 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvvcm\" (UniqueName: \"kubernetes.io/projected/26bf0193-c1b8-4018-a7e4-4429a4292dfb-kube-api-access-zvvcm\") pod \"cert-manager-webhook-687f57d79b-bfc2c\" (UID: \"26bf0193-c1b8-4018-a7e4-4429a4292dfb\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c"
Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.157774 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcw44\" (UniqueName: \"kubernetes.io/projected/b9d02d93-3df5-4e4a-99b3-07329087dc2c-kube-api-access-wcw44\") pod \"cert-manager-cainjector-cf98fcc89-b5ngd\" (UID: \"b9d02d93-3df5-4e4a-99b3-07329087dc2c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd"
Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.299422 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd"
Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.313560 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-wtwpn"
Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.324581 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c"
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.510621 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd"] Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.561946 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-wtwpn"] Feb 03 10:16:48 crc kubenswrapper[5010]: W0203 10:16:48.564101 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7746ae6f_d9a0_4bba_a7bc_4920ed478ff4.slice/crio-1a6b2fff6c2c877f9faaba2e7766850766fc6b249d477de0cfa169d4e843e012 WatchSource:0}: Error finding container 1a6b2fff6c2c877f9faaba2e7766850766fc6b249d477de0cfa169d4e843e012: Status 404 returned error can't find the container with id 1a6b2fff6c2c877f9faaba2e7766850766fc6b249d477de0cfa169d4e843e012 Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.784198 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bfc2c"] Feb 03 10:16:48 crc kubenswrapper[5010]: W0203 10:16:48.786740 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26bf0193_c1b8_4018_a7e4_4429a4292dfb.slice/crio-a29dd4f000f1c35a47352aaab15731442e114bbaa34a4c67674d2948fb1a296a WatchSource:0}: Error finding container a29dd4f000f1c35a47352aaab15731442e114bbaa34a4c67674d2948fb1a296a: Status 404 returned error can't find the container with id a29dd4f000f1c35a47352aaab15731442e114bbaa34a4c67674d2948fb1a296a Feb 03 10:16:49 crc kubenswrapper[5010]: I0203 10:16:49.408323 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-wtwpn" event={"ID":"7746ae6f-d9a0-4bba-a7bc-4920ed478ff4","Type":"ContainerStarted","Data":"1a6b2fff6c2c877f9faaba2e7766850766fc6b249d477de0cfa169d4e843e012"} Feb 03 10:16:49 crc kubenswrapper[5010]: I0203 10:16:49.410880 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" event={"ID":"26bf0193-c1b8-4018-a7e4-4429a4292dfb","Type":"ContainerStarted","Data":"a29dd4f000f1c35a47352aaab15731442e114bbaa34a4c67674d2948fb1a296a"} Feb 03 10:16:49 crc kubenswrapper[5010]: I0203 10:16:49.412710 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd" event={"ID":"b9d02d93-3df5-4e4a-99b3-07329087dc2c","Type":"ContainerStarted","Data":"2cddbddc0228cef92a4671f6daa25b6d3b74e64583cf8aa6c4e62bacce552dbc"} Feb 03 10:16:53 crc kubenswrapper[5010]: I0203 10:16:53.437807 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd" event={"ID":"b9d02d93-3df5-4e4a-99b3-07329087dc2c","Type":"ContainerStarted","Data":"436ff1c500f0d5f50c199f3323f28bb5ed29b2ccdcc4fdd70509225c7c1e56c3"} Feb 03 10:16:53 crc kubenswrapper[5010]: I0203 10:16:53.439931 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-wtwpn" event={"ID":"7746ae6f-d9a0-4bba-a7bc-4920ed478ff4","Type":"ContainerStarted","Data":"0bf8d1d6cf91e2f16e9cad3a294971e83cd58c3cd0109b077649ab3f47ecd540"} Feb 03 10:16:53 crc kubenswrapper[5010]: I0203 10:16:53.441275 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" 
event={"ID":"26bf0193-c1b8-4018-a7e4-4429a4292dfb","Type":"ContainerStarted","Data":"c71972428c6cfe55c1f0ecb7037993e0707efe5fe272aecb60ca9f4cecaee590"} Feb 03 10:16:53 crc kubenswrapper[5010]: I0203 10:16:53.441410 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" Feb 03 10:16:53 crc kubenswrapper[5010]: I0203 10:16:53.458300 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd" podStartSLOduration=2.722558656 podStartE2EDuration="6.458281197s" podCreationTimestamp="2026-02-03 10:16:47 +0000 UTC" firstStartedPulling="2026-02-03 10:16:48.518449594 +0000 UTC m=+878.674425723" lastFinishedPulling="2026-02-03 10:16:52.254172135 +0000 UTC m=+882.410148264" observedRunningTime="2026-02-03 10:16:53.451619142 +0000 UTC m=+883.607595311" watchObservedRunningTime="2026-02-03 10:16:53.458281197 +0000 UTC m=+883.614257336" Feb 03 10:16:53 crc kubenswrapper[5010]: I0203 10:16:53.480036 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-wtwpn" podStartSLOduration=2.783576072 podStartE2EDuration="6.480018857s" podCreationTimestamp="2026-02-03 10:16:47 +0000 UTC" firstStartedPulling="2026-02-03 10:16:48.565969974 +0000 UTC m=+878.721946103" lastFinishedPulling="2026-02-03 10:16:52.262412749 +0000 UTC m=+882.418388888" observedRunningTime="2026-02-03 10:16:53.477607687 +0000 UTC m=+883.633583826" watchObservedRunningTime="2026-02-03 10:16:53.480018857 +0000 UTC m=+883.635994986" Feb 03 10:16:53 crc kubenswrapper[5010]: I0203 10:16:53.495516 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" podStartSLOduration=3.022764832 podStartE2EDuration="6.495492971s" podCreationTimestamp="2026-02-03 10:16:47 +0000 UTC" firstStartedPulling="2026-02-03 10:16:48.78886788 +0000 UTC m=+878.944844009" lastFinishedPulling="2026-02-03 10:16:52.261596019 +0000 UTC m=+882.417572148" observedRunningTime="2026-02-03 10:16:53.492239501 +0000 UTC m=+883.648215630" watchObservedRunningTime="2026-02-03 10:16:53.495492971 +0000 UTC m=+883.651469120" Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.757160 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-68p7p"] Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.757802 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-controller" containerID="cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf" gracePeriod=30 Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.757855 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="nbdb" containerID="cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" gracePeriod=30 Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.757938 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kube-rbac-proxy-node" containerID="cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3" gracePeriod=30 Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 
Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.757964 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="sbdb" containerID="cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" gracePeriod=30
Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.758018 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-acl-logging" containerID="cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142" gracePeriod=30
Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.758153 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="northd" containerID="cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b" gracePeriod=30
Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.795407 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" containerID="cri-o://bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a" gracePeriod=30
Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.905657 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.906095 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.912483 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.912530 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
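The ExecSync errors above are the nbdb/sbdb readiness probes racing the graceful shutdown: once CRI-O begins stopping a container it refuses to register new exec PIDs, so every probe exec fails with "container is stopping" until the container exits. The probe command itself is visible in the log; a sketch of how an exec probe with that command would be declared using the k8s.io/api types (only the command string comes from the log, everything else here is an assumption):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // ovndbReadiness builds an exec readiness probe like the one the log shows
    // being run against the nbdb/sbdb containers; db is "nb" or "sb".
    func ovndbReadiness(db string) *corev1.Probe {
        script := "set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"" + db + "\"\n"
        return &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                Exec: &corev1.ExecAction{Command: []string{"/bin/bash", "-c", script}},
            },
            PeriodSeconds: 10, // assumption, not taken from the log
        }
    }

    func main() {
        fmt.Println(ovndbReadiness("nb").Exec.Command)
    }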
Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.914102 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.914130 5010 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="sbdb"
Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.914183 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.914196 5010 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="nbdb"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.099341 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/3.log"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.101715 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovn-acl-logging/0.log"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.102152 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovn-controller/0.log"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.102553 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p"
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155156 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dx6zw"] Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155393 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kubecfg-setup" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155409 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kubecfg-setup" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155419 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="nbdb" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155426 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="nbdb" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155435 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="sbdb" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155442 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="sbdb" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155451 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155457 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155465 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155471 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155788 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="northd" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155902 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="northd" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155918 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155925 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155940 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-acl-logging" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155946 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-acl-logging" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155963 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" 
containerName="kube-rbac-proxy-ovn-metrics" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155971 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kube-rbac-proxy-ovn-metrics" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155992 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155998 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.156012 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156019 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.156036 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kube-rbac-proxy-node" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156042 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kube-rbac-proxy-node" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156836 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kube-rbac-proxy-ovn-metrics" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156881 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156895 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156908 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156919 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-acl-logging" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156932 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156939 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="sbdb" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156946 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156958 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156966 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="northd" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156976 
5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kube-rbac-proxy-node" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156984 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="nbdb" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.157651 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.157676 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.165544 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256672 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-netd\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256749 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-var-lib-openvswitch\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256780 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-netns\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256804 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-slash\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256844 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-ovn\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256845 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256885 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-slash" (OuterVolumeSpecName: "host-slash") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256889 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256863 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-systemd-units\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256853 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256846 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256926 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256959 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-openvswitch\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257012 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-etc-openvswitch\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257027 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-bin\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257045 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-log-socket\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257076 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xwzz\" (UniqueName: \"kubernetes.io/projected/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-kube-api-access-2xwzz\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257091 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257120 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovn-node-metrics-cert\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257142 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-env-overrides\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257159 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-script-lib\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257180 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-ovn-kubernetes\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257200 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-node-log\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257262 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-kubelet\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257282 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-config\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257300 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-systemd\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257482 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-cni-bin\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257509 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-slash\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257527 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257561 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-run-netns\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257586 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-ovn\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257600 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmtnq\" (UniqueName: \"kubernetes.io/projected/44b9089e-c580-4353-9e4b-04a3a270e59f-kube-api-access-pmtnq\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257616 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-node-log\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257635 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-cni-netd\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257659 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-env-overrides\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257676 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-ovnkube-config\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257714 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-ovnkube-script-lib\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257742 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-var-lib-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257757 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-systemd-units\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257775 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257795 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-kubelet\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257819 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-run-ovn-kubernetes\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257832 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44b9089e-c580-4353-9e4b-04a3a270e59f-ovn-node-metrics-cert\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257854 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-systemd\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257899 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-etc-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257043 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257069 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257087 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257934 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-log-socket\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257980 5010 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257995 5010 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258006 5010 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258016 5010 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-slash\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258027 5010 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258039 5010 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258050 5010 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258060 5010 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258071 5010 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257110 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-log-socket" (OuterVolumeSpecName: "log-socket") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257134 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258603 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258927 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.259202 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.259257 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.259282 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-node-log" (OuterVolumeSpecName: "node-log") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.259301 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.264481 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.264842 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-kube-api-access-2xwzz" (OuterVolumeSpecName: "kube-api-access-2xwzz") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "kube-api-access-2xwzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.273031 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359141 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-etc-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359206 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-log-socket\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359247 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-cni-bin\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359270 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-slash\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359283 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-etc-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359291 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359343 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-slash\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359349 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-log-socket\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359378 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-run-netns\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359356 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-run-netns\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359421 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-ovn\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359443 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmtnq\" (UniqueName: \"kubernetes.io/projected/44b9089e-c580-4353-9e4b-04a3a270e59f-kube-api-access-pmtnq\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359464 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-node-log\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359486 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-cni-netd\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359512 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-env-overrides\") pod \"ovnkube-node-dx6zw\" (UID: 
\"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359533 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-ovnkube-config\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359568 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-ovnkube-script-lib\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359612 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-systemd-units\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359633 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-var-lib-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359656 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359713 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-kubelet\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359739 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-run-ovn-kubernetes\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359761 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44b9089e-c580-4353-9e4b-04a3a270e59f-ovn-node-metrics-cert\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359785 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-systemd\") pod \"ovnkube-node-dx6zw\" (UID: 
\"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359835 5010 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-log-socket\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359849 5010 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359862 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xwzz\" (UniqueName: \"kubernetes.io/projected/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-kube-api-access-2xwzz\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359874 5010 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359886 5010 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359898 5010 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359911 5010 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359923 5010 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-node-log\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359935 5010 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359945 5010 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359956 5010 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359988 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-systemd\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359321 5010 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360028 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-ovn\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359349 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-cni-bin\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360378 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-var-lib-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360378 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-systemd-units\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360416 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360428 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-node-log\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360454 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-kubelet\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360456 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-cni-netd\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360560 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-run-ovn-kubernetes\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.361013 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-env-overrides\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.361234 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-ovnkube-config\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.361303 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-ovnkube-script-lib\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.364360 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44b9089e-c580-4353-9e4b-04a3a270e59f-ovn-node-metrics-cert\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.377695 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmtnq\" (UniqueName: \"kubernetes.io/projected/44b9089e-c580-4353-9e4b-04a3a270e59f-kube-api-access-pmtnq\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.464494 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/2.log" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.464918 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/1.log" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.464957 5010 generic.go:334] "Generic (PLEG): container finished" podID="8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef" containerID="350b279aaf7efa7dad21bc0c20fa082b7c655a83b208a5091e614ce3efe34ce4" exitCode=2 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.465014 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerDied","Data":"350b279aaf7efa7dad21bc0c20fa082b7c655a83b208a5091e614ce3efe34ce4"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.465047 5010 scope.go:117] "RemoveContainer" containerID="d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.465495 5010 scope.go:117] "RemoveContainer" containerID="350b279aaf7efa7dad21bc0c20fa082b7c655a83b208a5091e614ce3efe34ce4" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.468738 5010 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/3.log" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.473826 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovn-acl-logging/0.log" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.474396 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovn-controller/0.log" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475692 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a" exitCode=0 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475725 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" exitCode=0 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475738 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" exitCode=0 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475747 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b" exitCode=0 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475741 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475795 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475800 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475977 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476008 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476027 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475757 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919" exitCode=0 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476071 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3" exitCode=0 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476090 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142" exitCode=143 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476100 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf" exitCode=143 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476161 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476205 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476241 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476248 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476255 5010 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476458 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476467 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476474 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476481 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476488 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476495 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476511 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476526 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476536 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476544 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476551 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476558 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476564 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476571 5010 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476577 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476586 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476593 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476605 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476618 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476626 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476633 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476639 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476646 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476653 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476660 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476667 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476682 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476690 5010 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476700 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"397d6ad2bb41a4df9c0dc30fd14d52b9e67cbf17ccd52dacef60dc2182647ba3"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476715 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476724 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476732 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476740 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476747 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476754 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476761 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476768 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476775 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476782 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.488975 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.509636 5010 scope.go:117] "RemoveContainer" containerID="bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.516712 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-68p7p"] Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.521647 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-68p7p"] Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.533704 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.561101 5010 scope.go:117] "RemoveContainer" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.577914 5010 scope.go:117] "RemoveContainer" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.598015 5010 scope.go:117] "RemoveContainer" containerID="24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.612197 5010 scope.go:117] "RemoveContainer" containerID="12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.626428 5010 scope.go:117] "RemoveContainer" containerID="76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.700979 5010 scope.go:117] "RemoveContainer" containerID="8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.715659 5010 scope.go:117] "RemoveContainer" containerID="f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.751326 5010 scope.go:117] "RemoveContainer" containerID="5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.765904 5010 scope.go:117] "RemoveContainer" containerID="bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.766260 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": container with ID starting with bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a not found: ID does not exist" containerID="bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.766305 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} err="failed to get container status \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": rpc error: code = NotFound desc = could not find container \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": container with ID starting with bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.766343 5010 scope.go:117] "RemoveContainer" 
containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.766642 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": container with ID starting with ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db not found: ID does not exist" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.766673 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} err="failed to get container status \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": rpc error: code = NotFound desc = could not find container \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": container with ID starting with ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.766698 5010 scope.go:117] "RemoveContainer" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.766915 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": container with ID starting with 1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e not found: ID does not exist" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.766943 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} err="failed to get container status \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": rpc error: code = NotFound desc = could not find container \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": container with ID starting with 1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.766962 5010 scope.go:117] "RemoveContainer" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.767134 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": container with ID starting with 6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7 not found: ID does not exist" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767153 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} err="failed to get container status \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": rpc error: code = NotFound desc = could not find container \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": container with ID starting with 
6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767166 5010 scope.go:117] "RemoveContainer" containerID="24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.767380 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": container with ID starting with 24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b not found: ID does not exist" containerID="24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767406 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} err="failed to get container status \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": rpc error: code = NotFound desc = could not find container \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": container with ID starting with 24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767420 5010 scope.go:117] "RemoveContainer" containerID="12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.767649 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": container with ID starting with 12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919 not found: ID does not exist" containerID="12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767670 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} err="failed to get container status \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": rpc error: code = NotFound desc = could not find container \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": container with ID starting with 12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767683 5010 scope.go:117] "RemoveContainer" containerID="76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.767874 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": container with ID starting with 76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3 not found: ID does not exist" containerID="76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767895 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} err="failed to get container status \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": rpc 
error: code = NotFound desc = could not find container \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": container with ID starting with 76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767910 5010 scope.go:117] "RemoveContainer" containerID="8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.768062 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": container with ID starting with 8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142 not found: ID does not exist" containerID="8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768082 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} err="failed to get container status \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": rpc error: code = NotFound desc = could not find container \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": container with ID starting with 8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768094 5010 scope.go:117] "RemoveContainer" containerID="f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.768294 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": container with ID starting with f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf not found: ID does not exist" containerID="f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768314 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} err="failed to get container status \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": rpc error: code = NotFound desc = could not find container \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": container with ID starting with f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768325 5010 scope.go:117] "RemoveContainer" containerID="5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.768573 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": container with ID starting with 5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53 not found: ID does not exist" containerID="5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768628 5010 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} err="failed to get container status \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": rpc error: code = NotFound desc = could not find container \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": container with ID starting with 5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768668 5010 scope.go:117] "RemoveContainer" containerID="bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768897 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} err="failed to get container status \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": rpc error: code = NotFound desc = could not find container \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": container with ID starting with bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768916 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769104 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} err="failed to get container status \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": rpc error: code = NotFound desc = could not find container \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": container with ID starting with ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769130 5010 scope.go:117] "RemoveContainer" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769417 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} err="failed to get container status \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": rpc error: code = NotFound desc = could not find container \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": container with ID starting with 1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769445 5010 scope.go:117] "RemoveContainer" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769652 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} err="failed to get container status \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": rpc error: code = NotFound desc = could not find container \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": container with ID starting with 6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7 not found: ID does not exist" Feb 
03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769674 5010 scope.go:117] "RemoveContainer" containerID="24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769863 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} err="failed to get container status \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": rpc error: code = NotFound desc = could not find container \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": container with ID starting with 24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769896 5010 scope.go:117] "RemoveContainer" containerID="12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770073 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} err="failed to get container status \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": rpc error: code = NotFound desc = could not find container \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": container with ID starting with 12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770091 5010 scope.go:117] "RemoveContainer" containerID="76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770301 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} err="failed to get container status \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": rpc error: code = NotFound desc = could not find container \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": container with ID starting with 76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770329 5010 scope.go:117] "RemoveContainer" containerID="8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770511 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} err="failed to get container status \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": rpc error: code = NotFound desc = could not find container \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": container with ID starting with 8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770528 5010 scope.go:117] "RemoveContainer" containerID="f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770704 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} err="failed to get container status 
\"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": rpc error: code = NotFound desc = could not find container \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": container with ID starting with f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770729 5010 scope.go:117] "RemoveContainer" containerID="5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770905 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} err="failed to get container status \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": rpc error: code = NotFound desc = could not find container \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": container with ID starting with 5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770926 5010 scope.go:117] "RemoveContainer" containerID="bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771115 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} err="failed to get container status \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": rpc error: code = NotFound desc = could not find container \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": container with ID starting with bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771142 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771347 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} err="failed to get container status \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": rpc error: code = NotFound desc = could not find container \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": container with ID starting with ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771366 5010 scope.go:117] "RemoveContainer" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771588 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} err="failed to get container status \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": rpc error: code = NotFound desc = could not find container \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": container with ID starting with 1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771623 5010 scope.go:117] "RemoveContainer" 
containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771843 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} err="failed to get container status \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": rpc error: code = NotFound desc = could not find container \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": container with ID starting with 6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771872 5010 scope.go:117] "RemoveContainer" containerID="24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772070 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} err="failed to get container status \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": rpc error: code = NotFound desc = could not find container \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": container with ID starting with 24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772093 5010 scope.go:117] "RemoveContainer" containerID="12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772274 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} err="failed to get container status \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": rpc error: code = NotFound desc = could not find container \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": container with ID starting with 12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772298 5010 scope.go:117] "RemoveContainer" containerID="76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772499 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} err="failed to get container status \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": rpc error: code = NotFound desc = could not find container \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": container with ID starting with 76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772518 5010 scope.go:117] "RemoveContainer" containerID="8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772702 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} err="failed to get container status \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": rpc error: code = NotFound desc = could not find 
container \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": container with ID starting with 8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772727 5010 scope.go:117] "RemoveContainer" containerID="f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772925 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} err="failed to get container status \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": rpc error: code = NotFound desc = could not find container \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": container with ID starting with f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772946 5010 scope.go:117] "RemoveContainer" containerID="5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773156 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} err="failed to get container status \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": rpc error: code = NotFound desc = could not find container \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": container with ID starting with 5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773183 5010 scope.go:117] "RemoveContainer" containerID="bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773444 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} err="failed to get container status \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": rpc error: code = NotFound desc = could not find container \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": container with ID starting with bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773465 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773634 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} err="failed to get container status \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": rpc error: code = NotFound desc = could not find container \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": container with ID starting with ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773654 5010 scope.go:117] "RemoveContainer" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773834 5010 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} err="failed to get container status \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": rpc error: code = NotFound desc = could not find container \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": container with ID starting with 1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773853 5010 scope.go:117] "RemoveContainer" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.774010 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} err="failed to get container status \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": rpc error: code = NotFound desc = could not find container \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": container with ID starting with 6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.774027 5010 scope.go:117] "RemoveContainer" containerID="24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.774182 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} err="failed to get container status \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": rpc error: code = NotFound desc = could not find container \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": container with ID starting with 24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.774199 5010 scope.go:117] "RemoveContainer" containerID="12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.774527 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} err="failed to get container status \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": rpc error: code = NotFound desc = could not find container \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": container with ID starting with 12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.774547 5010 scope.go:117] "RemoveContainer" containerID="76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.775109 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} err="failed to get container status \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": rpc error: code = NotFound desc = could not find container \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": container with ID starting with 
76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.775141 5010 scope.go:117] "RemoveContainer" containerID="8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.775380 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} err="failed to get container status \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": rpc error: code = NotFound desc = could not find container \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": container with ID starting with 8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.775410 5010 scope.go:117] "RemoveContainer" containerID="f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.775627 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} err="failed to get container status \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": rpc error: code = NotFound desc = could not find container \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": container with ID starting with f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.775648 5010 scope.go:117] "RemoveContainer" containerID="5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.775849 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} err="failed to get container status \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": rpc error: code = NotFound desc = could not find container \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": container with ID starting with 5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53 not found: ID does not exist" Feb 03 10:16:58 crc kubenswrapper[5010]: I0203 10:16:58.326640 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" Feb 03 10:16:58 crc kubenswrapper[5010]: I0203 10:16:58.482956 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/2.log" Feb 03 10:16:58 crc kubenswrapper[5010]: I0203 10:16:58.483064 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerStarted","Data":"572bea666e8d94e55589ce0ee754fcd331cf7f3eb1bcbaf5139a1e8bb58fe555"} Feb 03 10:16:58 crc kubenswrapper[5010]: I0203 10:16:58.488453 5010 generic.go:334] "Generic (PLEG): container finished" podID="44b9089e-c580-4353-9e4b-04a3a270e59f" containerID="fa2da3302ee5fa1d268ceb3a598a189ac7d6e299c97d6dee08f81aa1fb56eb01" exitCode=0 Feb 03 10:16:58 crc kubenswrapper[5010]: I0203 10:16:58.488527 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" 
event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerDied","Data":"fa2da3302ee5fa1d268ceb3a598a189ac7d6e299c97d6dee08f81aa1fb56eb01"} Feb 03 10:16:58 crc kubenswrapper[5010]: I0203 10:16:58.488566 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"1fa76f159b9c052306233546e5e3cd8d81de34f2a2da7a289615528f73058fbe"} Feb 03 10:16:58 crc kubenswrapper[5010]: I0203 10:16:58.510431 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" path="/var/lib/kubelet/pods/afbb630a-0dee-4c9c-90ff-cb710b9da3f2/volumes" Feb 03 10:16:59 crc kubenswrapper[5010]: I0203 10:16:59.498420 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"bca9e630bba9adf10225d1b40d115a3b086a1ff3fdd142b899c35dff3f4a914d"} Feb 03 10:16:59 crc kubenswrapper[5010]: I0203 10:16:59.498746 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"92d72cb0f194ae589805d49dd0b68ceec7415daabad163e2247ec0a73716dc5c"} Feb 03 10:16:59 crc kubenswrapper[5010]: I0203 10:16:59.498769 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"4a12e96fb4ee57376c3d040f69a13b895c682aed4f5028634335c518c51c8f0c"} Feb 03 10:16:59 crc kubenswrapper[5010]: I0203 10:16:59.498780 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"b79f479f381497cc6d07c190dea9414670d2433fe7906dd0f406042adace4073"} Feb 03 10:16:59 crc kubenswrapper[5010]: I0203 10:16:59.498789 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"cca2e8459522efb134428e3e9d01437c0c1225119fa23540fa5134fad3cb23f8"} Feb 03 10:16:59 crc kubenswrapper[5010]: I0203 10:16:59.498798 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"117d2f3555d10e53e86cfbaa4ed8c90b1e5a3f5dec1921952630fad01f344b5e"} Feb 03 10:17:01 crc kubenswrapper[5010]: I0203 10:17:01.510910 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"b32863f8ff6cb7f7f2e794c0b071138811c9d86d4893ef7d9c37067a9f430006"} Feb 03 10:17:04 crc kubenswrapper[5010]: I0203 10:17:04.532442 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"56cb1f51ac5d26eb76ba983dc58bfc8b2bed77b234f386c5830199051d68ed79"} Feb 03 10:17:04 crc kubenswrapper[5010]: I0203 10:17:04.533024 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:17:04 crc kubenswrapper[5010]: I0203 10:17:04.533039 5010 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:17:04 crc kubenswrapper[5010]: I0203 10:17:04.533051 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:17:04 crc kubenswrapper[5010]: I0203 10:17:04.568131 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" podStartSLOduration=7.568109369 podStartE2EDuration="7.568109369s" podCreationTimestamp="2026-02-03 10:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:17:04.564451928 +0000 UTC m=+894.720428067" watchObservedRunningTime="2026-02-03 10:17:04.568109369 +0000 UTC m=+894.724085498" Feb 03 10:17:04 crc kubenswrapper[5010]: I0203 10:17:04.575133 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:17:04 crc kubenswrapper[5010]: I0203 10:17:04.582074 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:17:16 crc kubenswrapper[5010]: I0203 10:17:16.390429 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:17:16 crc kubenswrapper[5010]: I0203 10:17:16.391263 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:17:27 crc kubenswrapper[5010]: I0203 10:17:27.514033 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.104714 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"] Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.106139 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.108630 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.116508 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"] Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.204409 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.204480 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ljf9\" (UniqueName: \"kubernetes.io/projected/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-kube-api-access-5ljf9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.204507 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.305929 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.305978 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ljf9\" (UniqueName: \"kubernetes.io/projected/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-kube-api-access-5ljf9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.305998 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.306869 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.307023 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.327329 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ljf9\" (UniqueName: \"kubernetes.io/projected/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-kube-api-access-5ljf9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.423652 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.589305 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"] Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.731427 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" event={"ID":"a64fc313-0bcd-40df-a19f-052eb0d1ce8a","Type":"ContainerStarted","Data":"4ed2d000e5539e4f0f00f339331ba7863091489a20723c71752d5bc5ce0e5a04"} Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.731745 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" event={"ID":"a64fc313-0bcd-40df-a19f-052eb0d1ce8a","Type":"ContainerStarted","Data":"b9c5e242439c1a925e9e8a69b8c937e6e81018435fb3186bd47eec8937e184d4"} Feb 03 10:17:40 crc kubenswrapper[5010]: I0203 10:17:40.737782 5010 generic.go:334] "Generic (PLEG): container finished" podID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerID="4ed2d000e5539e4f0f00f339331ba7863091489a20723c71752d5bc5ce0e5a04" exitCode=0 Feb 03 10:17:40 crc kubenswrapper[5010]: I0203 10:17:40.737819 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" event={"ID":"a64fc313-0bcd-40df-a19f-052eb0d1ce8a","Type":"ContainerDied","Data":"4ed2d000e5539e4f0f00f339331ba7863091489a20723c71752d5bc5ce0e5a04"} Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.411497 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jw95h"] Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.413411 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.424750 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jw95h"] Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.531136 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-utilities\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.531202 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q58z\" (UniqueName: \"kubernetes.io/projected/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-kube-api-access-4q58z\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.531290 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-catalog-content\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.632098 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-catalog-content\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.632146 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-utilities\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.632174 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q58z\" (UniqueName: \"kubernetes.io/projected/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-kube-api-access-4q58z\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.633663 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-catalog-content\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.633781 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-utilities\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.665805 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4q58z\" (UniqueName: \"kubernetes.io/projected/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-kube-api-access-4q58z\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.733337 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.996474 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jw95h"] Feb 03 10:17:42 crc kubenswrapper[5010]: W0203 10:17:42.012040 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda595e8ea_8e1d_44c1_9ee0_0e40fa3a0f96.slice/crio-60580599bfa6e867910c3854625eecb82cba759cc65d13303775a63e7e0ee852 WatchSource:0}: Error finding container 60580599bfa6e867910c3854625eecb82cba759cc65d13303775a63e7e0ee852: Status 404 returned error can't find the container with id 60580599bfa6e867910c3854625eecb82cba759cc65d13303775a63e7e0ee852 Feb 03 10:17:42 crc kubenswrapper[5010]: I0203 10:17:42.750254 5010 generic.go:334] "Generic (PLEG): container finished" podID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerID="443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9" exitCode=0 Feb 03 10:17:42 crc kubenswrapper[5010]: I0203 10:17:42.750340 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw95h" event={"ID":"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96","Type":"ContainerDied","Data":"443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9"} Feb 03 10:17:42 crc kubenswrapper[5010]: I0203 10:17:42.750633 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw95h" event={"ID":"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96","Type":"ContainerStarted","Data":"60580599bfa6e867910c3854625eecb82cba759cc65d13303775a63e7e0ee852"} Feb 03 10:17:42 crc kubenswrapper[5010]: I0203 10:17:42.752965 5010 generic.go:334] "Generic (PLEG): container finished" podID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerID="77236826d76411acd09f4b6acbc2cbab98aaaed6120d41840fe09cf196c2066a" exitCode=0 Feb 03 10:17:42 crc kubenswrapper[5010]: I0203 10:17:42.753073 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" event={"ID":"a64fc313-0bcd-40df-a19f-052eb0d1ce8a","Type":"ContainerDied","Data":"77236826d76411acd09f4b6acbc2cbab98aaaed6120d41840fe09cf196c2066a"} Feb 03 10:17:43 crc kubenswrapper[5010]: I0203 10:17:43.759543 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw95h" event={"ID":"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96","Type":"ContainerStarted","Data":"3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c"} Feb 03 10:17:43 crc kubenswrapper[5010]: I0203 10:17:43.764918 5010 generic.go:334] "Generic (PLEG): container finished" podID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerID="288db0e960f4e0f01e04dc94840da4564bc08e4cfd6ccbf106dfad7054926599" exitCode=0 Feb 03 10:17:43 crc kubenswrapper[5010]: I0203 10:17:43.765182 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" 
event={"ID":"a64fc313-0bcd-40df-a19f-052eb0d1ce8a","Type":"ContainerDied","Data":"288db0e960f4e0f01e04dc94840da4564bc08e4cfd6ccbf106dfad7054926599"} Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.559593 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.755703 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-bundle\") pod \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.755993 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-util\") pod \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.756051 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ljf9\" (UniqueName: \"kubernetes.io/projected/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-kube-api-access-5ljf9\") pod \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.762800 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-bundle" (OuterVolumeSpecName: "bundle") pod "a64fc313-0bcd-40df-a19f-052eb0d1ce8a" (UID: "a64fc313-0bcd-40df-a19f-052eb0d1ce8a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.784416 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" event={"ID":"a64fc313-0bcd-40df-a19f-052eb0d1ce8a","Type":"ContainerDied","Data":"b9c5e242439c1a925e9e8a69b8c937e6e81018435fb3186bd47eec8937e184d4"} Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.784489 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9c5e242439c1a925e9e8a69b8c937e6e81018435fb3186bd47eec8937e184d4" Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.784588 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.787992 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-util" (OuterVolumeSpecName: "util") pod "a64fc313-0bcd-40df-a19f-052eb0d1ce8a" (UID: "a64fc313-0bcd-40df-a19f-052eb0d1ce8a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.857082 5010 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.857117 5010 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-util\") on node \"crc\" DevicePath \"\"" Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.940933 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-kube-api-access-5ljf9" (OuterVolumeSpecName: "kube-api-access-5ljf9") pod "a64fc313-0bcd-40df-a19f-052eb0d1ce8a" (UID: "a64fc313-0bcd-40df-a19f-052eb0d1ce8a"). InnerVolumeSpecName "kube-api-access-5ljf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.958531 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ljf9\" (UniqueName: \"kubernetes.io/projected/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-kube-api-access-5ljf9\") on node \"crc\" DevicePath \"\"" Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.389907 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.389977 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.790673 5010 generic.go:334] "Generic (PLEG): container finished" podID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerID="3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c" exitCode=0 Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.790711 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw95h" event={"ID":"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96","Type":"ContainerDied","Data":"3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c"} Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.813467 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dk2xz"] Feb 03 10:17:46 crc kubenswrapper[5010]: E0203 10:17:46.814748 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerName="extract" Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.814874 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerName="extract" Feb 03 10:17:46 crc kubenswrapper[5010]: E0203 10:17:46.814951 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerName="pull" Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.815003 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerName="pull" Feb 03 
10:17:46 crc kubenswrapper[5010]: E0203 10:17:46.815086 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerName="util" Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.815138 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerName="util" Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.815347 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerName="extract" Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.816358 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.823328 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dk2xz"] Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.972512 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-catalog-content\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.972556 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-utilities\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.972579 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjwsm\" (UniqueName: \"kubernetes.io/projected/aae42090-f4be-43c8-b0b1-90fe576195a3-kube-api-access-rjwsm\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.073572 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-catalog-content\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.073621 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-utilities\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.073657 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjwsm\" (UniqueName: \"kubernetes.io/projected/aae42090-f4be-43c8-b0b1-90fe576195a3-kube-api-access-rjwsm\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.074465 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-catalog-content\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.074513 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-utilities\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.092141 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjwsm\" (UniqueName: \"kubernetes.io/projected/aae42090-f4be-43c8-b0b1-90fe576195a3-kube-api-access-rjwsm\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.135676 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.762326 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dk2xz"] Feb 03 10:17:47 crc kubenswrapper[5010]: W0203 10:17:47.768783 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae42090_f4be_43c8_b0b1_90fe576195a3.slice/crio-a8bec8e2b56c771c7079c4cac54a1acdfd8e585a247992ddbbfe6031d2222fb8 WatchSource:0}: Error finding container a8bec8e2b56c771c7079c4cac54a1acdfd8e585a247992ddbbfe6031d2222fb8: Status 404 returned error can't find the container with id a8bec8e2b56c771c7079c4cac54a1acdfd8e585a247992ddbbfe6031d2222fb8 Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.799644 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk2xz" event={"ID":"aae42090-f4be-43c8-b0b1-90fe576195a3","Type":"ContainerStarted","Data":"a8bec8e2b56c771c7079c4cac54a1acdfd8e585a247992ddbbfe6031d2222fb8"} Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.802820 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw95h" event={"ID":"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96","Type":"ContainerStarted","Data":"c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981"} Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.839162 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jw95h" podStartSLOduration=2.277285675 podStartE2EDuration="6.839147197s" podCreationTimestamp="2026-02-03 10:17:41 +0000 UTC" firstStartedPulling="2026-02-03 10:17:42.752302224 +0000 UTC m=+932.908278353" lastFinishedPulling="2026-02-03 10:17:47.314163746 +0000 UTC m=+937.470139875" observedRunningTime="2026-02-03 10:17:47.837496134 +0000 UTC m=+937.993472273" watchObservedRunningTime="2026-02-03 10:17:47.839147197 +0000 UTC m=+937.995123326" Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.196691 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-frs8s"] Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.197635 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-frs8s" Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.199660 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.199797 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-fwd79" Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.200044 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.256300 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-frs8s"] Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.292843 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-827bf\" (UniqueName: \"kubernetes.io/projected/e5c85e5b-ab19-414d-97e6-767b9e01f731-kube-api-access-827bf\") pod \"nmstate-operator-646758c888-frs8s\" (UID: \"e5c85e5b-ab19-414d-97e6-767b9e01f731\") " pod="openshift-nmstate/nmstate-operator-646758c888-frs8s" Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.393586 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-827bf\" (UniqueName: \"kubernetes.io/projected/e5c85e5b-ab19-414d-97e6-767b9e01f731-kube-api-access-827bf\") pod \"nmstate-operator-646758c888-frs8s\" (UID: \"e5c85e5b-ab19-414d-97e6-767b9e01f731\") " pod="openshift-nmstate/nmstate-operator-646758c888-frs8s" Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.411275 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-827bf\" (UniqueName: \"kubernetes.io/projected/e5c85e5b-ab19-414d-97e6-767b9e01f731-kube-api-access-827bf\") pod \"nmstate-operator-646758c888-frs8s\" (UID: \"e5c85e5b-ab19-414d-97e6-767b9e01f731\") " pod="openshift-nmstate/nmstate-operator-646758c888-frs8s" Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.553531 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-frs8s" Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.800653 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-frs8s"] Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.823288 5010 generic.go:334] "Generic (PLEG): container finished" podID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerID="5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d" exitCode=0 Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.823364 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk2xz" event={"ID":"aae42090-f4be-43c8-b0b1-90fe576195a3","Type":"ContainerDied","Data":"5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d"} Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.824358 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-frs8s" event={"ID":"e5c85e5b-ab19-414d-97e6-767b9e01f731","Type":"ContainerStarted","Data":"5908b98cc9c4e8e06b25a0ee20e6cc49102e6a6e209fbb852ae959a901689b23"} Feb 03 10:17:50 crc kubenswrapper[5010]: I0203 10:17:50.835988 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk2xz" event={"ID":"aae42090-f4be-43c8-b0b1-90fe576195a3","Type":"ContainerStarted","Data":"646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769"} Feb 03 10:17:51 crc kubenswrapper[5010]: I0203 10:17:51.733784 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:17:51 crc kubenswrapper[5010]: I0203 10:17:51.734453 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:17:51 crc kubenswrapper[5010]: I0203 10:17:51.868968 5010 generic.go:334] "Generic (PLEG): container finished" podID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerID="646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769" exitCode=0 Feb 03 10:17:51 crc kubenswrapper[5010]: I0203 10:17:51.869042 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk2xz" event={"ID":"aae42090-f4be-43c8-b0b1-90fe576195a3","Type":"ContainerDied","Data":"646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769"} Feb 03 10:17:52 crc kubenswrapper[5010]: I0203 10:17:52.855379 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jw95h" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="registry-server" probeResult="failure" output=< Feb 03 10:17:52 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s Feb 03 10:17:52 crc kubenswrapper[5010]: > Feb 03 10:17:52 crc kubenswrapper[5010]: I0203 10:17:52.878189 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk2xz" event={"ID":"aae42090-f4be-43c8-b0b1-90fe576195a3","Type":"ContainerStarted","Data":"42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5"} Feb 03 10:17:52 crc kubenswrapper[5010]: I0203 10:17:52.880051 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-frs8s" event={"ID":"e5c85e5b-ab19-414d-97e6-767b9e01f731","Type":"ContainerStarted","Data":"231f510af2241efaa85d823418b2221940ce2782889b8739d680d24932992e4c"} Feb 03 10:17:52 crc kubenswrapper[5010]: I0203 
10:17:52.907896 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dk2xz" podStartSLOduration=3.317217381 podStartE2EDuration="6.907878953s" podCreationTimestamp="2026-02-03 10:17:46 +0000 UTC" firstStartedPulling="2026-02-03 10:17:48.825839113 +0000 UTC m=+938.981815242" lastFinishedPulling="2026-02-03 10:17:52.416500685 +0000 UTC m=+942.572476814" observedRunningTime="2026-02-03 10:17:52.901859588 +0000 UTC m=+943.057835717" watchObservedRunningTime="2026-02-03 10:17:52.907878953 +0000 UTC m=+943.063855082" Feb 03 10:17:52 crc kubenswrapper[5010]: I0203 10:17:52.922633 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-frs8s" podStartSLOduration=1.830706026 podStartE2EDuration="4.922613221s" podCreationTimestamp="2026-02-03 10:17:48 +0000 UTC" firstStartedPulling="2026-02-03 10:17:48.818935845 +0000 UTC m=+938.974911984" lastFinishedPulling="2026-02-03 10:17:51.91084305 +0000 UTC m=+942.066819179" observedRunningTime="2026-02-03 10:17:52.919463871 +0000 UTC m=+943.075440020" watchObservedRunningTime="2026-02-03 10:17:52.922613221 +0000 UTC m=+943.078589350" Feb 03 10:17:53 crc kubenswrapper[5010]: I0203 10:17:53.938396 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hl7ls"] Feb 03 10:17:53 crc kubenswrapper[5010]: I0203 10:17:53.939416 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls" Feb 03 10:17:53 crc kubenswrapper[5010]: I0203 10:17:53.945585 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-h8tpr" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.018375 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hl7ls"] Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.115021 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skncx\" (UniqueName: \"kubernetes.io/projected/552fa369-352c-4690-aa39-f0364021feae-kube-api-access-skncx\") pod \"nmstate-metrics-54757c584b-hl7ls\" (UID: \"552fa369-352c-4690-aa39-f0364021feae\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.167033 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-55jg2"] Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.168076 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.189604 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6"] Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.190285 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.193712 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.327241 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skncx\" (UniqueName: \"kubernetes.io/projected/552fa369-352c-4690-aa39-f0364021feae-kube-api-access-skncx\") pod \"nmstate-metrics-54757c584b-hl7ls\" (UID: \"552fa369-352c-4690-aa39-f0364021feae\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.336071 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6"] Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.428849 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72ppl\" (UniqueName: \"kubernetes.io/projected/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-kube-api-access-72ppl\") pod \"nmstate-webhook-8474b5b9d8-2xtg6\" (UID: \"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.428899 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-2xtg6\" (UID: \"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.429003 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-dbus-socket\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.429065 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22zdt\" (UniqueName: \"kubernetes.io/projected/d47b696a-a1d0-4389-a099-7f375ab72f8c-kube-api-access-22zdt\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.429129 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-ovs-socket\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.429179 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-nmstate-lock\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.530474 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-dbus-socket\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.530594 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22zdt\" (UniqueName: \"kubernetes.io/projected/d47b696a-a1d0-4389-a099-7f375ab72f8c-kube-api-access-22zdt\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.530886 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-dbus-socket\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.530997 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-ovs-socket\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.531085 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-nmstate-lock\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.531157 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-2xtg6\" (UID: \"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.531175 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72ppl\" (UniqueName: \"kubernetes.io/projected/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-kube-api-access-72ppl\") pod \"nmstate-webhook-8474b5b9d8-2xtg6\" (UID: \"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.531416 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-ovs-socket\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:17:54 crc kubenswrapper[5010]: E0203 10:17:54.531434 5010 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.531456 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-nmstate-lock\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:17:54 crc kubenswrapper[5010]: E0203 10:17:54.531503 5010 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-tls-key-pair podName:1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a nodeName:}" failed. No retries permitted until 2026-02-03 10:17:55.031476455 +0000 UTC m=+945.187452584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-2xtg6" (UID: "1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a") : secret "openshift-nmstate-webhook" not found Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.629993 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skncx\" (UniqueName: \"kubernetes.io/projected/552fa369-352c-4690-aa39-f0364021feae-kube-api-access-skncx\") pod \"nmstate-metrics-54757c584b-hl7ls\" (UID: \"552fa369-352c-4690-aa39-f0364021feae\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.634925 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22zdt\" (UniqueName: \"kubernetes.io/projected/d47b696a-a1d0-4389-a099-7f375ab72f8c-kube-api-access-22zdt\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.635450 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72ppl\" (UniqueName: \"kubernetes.io/projected/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-kube-api-access-72ppl\") pod \"nmstate-webhook-8474b5b9d8-2xtg6\" (UID: \"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.639193 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.705433 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg"] Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.706548 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.711465 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hgx6j" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.719346 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.719654 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.724737 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg"] Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.736247 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4tgz\" (UniqueName: \"kubernetes.io/projected/a09e0456-1529-4ece-9266-d02a283d6bd1-kube-api-access-l4tgz\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.736309 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a09e0456-1529-4ece-9266-d02a283d6bd1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.736446 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a09e0456-1529-4ece-9266-d02a283d6bd1-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.793257 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:17:54 crc kubenswrapper[5010]: W0203 10:17:54.825019 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd47b696a_a1d0_4389_a099_7f375ab72f8c.slice/crio-0018ace989cd238398395805035c6036e6d60f23cd14e853f7e6eed50bcba7d7 WatchSource:0}: Error finding container 0018ace989cd238398395805035c6036e6d60f23cd14e853f7e6eed50bcba7d7: Status 404 returned error can't find the container with id 0018ace989cd238398395805035c6036e6d60f23cd14e853f7e6eed50bcba7d7 Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.837273 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4tgz\" (UniqueName: \"kubernetes.io/projected/a09e0456-1529-4ece-9266-d02a283d6bd1-kube-api-access-l4tgz\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.837314 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a09e0456-1529-4ece-9266-d02a283d6bd1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.837373 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a09e0456-1529-4ece-9266-d02a283d6bd1-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: E0203 10:17:54.837558 5010 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 03 10:17:54 crc kubenswrapper[5010]: E0203 10:17:54.837618 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a09e0456-1529-4ece-9266-d02a283d6bd1-plugin-serving-cert podName:a09e0456-1529-4ece-9266-d02a283d6bd1 nodeName:}" failed. No retries permitted until 2026-02-03 10:17:55.337599856 +0000 UTC m=+945.493575995 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a09e0456-1529-4ece-9266-d02a283d6bd1-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-npjjg" (UID: "a09e0456-1529-4ece-9266-d02a283d6bd1") : secret "plugin-serving-cert" not found Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.838320 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a09e0456-1529-4ece-9266-d02a283d6bd1-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.856916 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4tgz\" (UniqueName: \"kubernetes.io/projected/a09e0456-1529-4ece-9266-d02a283d6bd1-kube-api-access-l4tgz\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.890298 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-55jg2" event={"ID":"d47b696a-a1d0-4389-a099-7f375ab72f8c","Type":"ContainerStarted","Data":"0018ace989cd238398395805035c6036e6d60f23cd14e853f7e6eed50bcba7d7"} Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.905027 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-85556757c-xgtrl"] Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.905920 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.946645 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-oauth-serving-cert\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.946740 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c407954-b971-4641-b466-882aecfa452d-console-oauth-config\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.946808 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c407954-b971-4641-b466-882aecfa452d-console-serving-cert\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.946826 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm8gd\" (UniqueName: \"kubernetes.io/projected/7c407954-b971-4641-b466-882aecfa452d-kube-api-access-zm8gd\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.946962 5010 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-service-ca\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.947010 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-trusted-ca-bundle\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.947074 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-console-config\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.948471 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85556757c-xgtrl"] Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048449 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-service-ca\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048515 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-trusted-ca-bundle\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048559 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-console-config\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048597 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-oauth-serving-cert\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048633 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c407954-b971-4641-b466-882aecfa452d-console-oauth-config\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048694 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c407954-b971-4641-b466-882aecfa452d-console-serving-cert\") pod \"console-85556757c-xgtrl\" (UID: 
\"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048716 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm8gd\" (UniqueName: \"kubernetes.io/projected/7c407954-b971-4641-b466-882aecfa452d-kube-api-access-zm8gd\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048757 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-2xtg6\" (UID: \"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.050502 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-trusted-ca-bundle\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.050624 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-oauth-serving-cert\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.050630 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-console-config\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.050912 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-service-ca\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.053898 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c407954-b971-4641-b466-882aecfa452d-console-serving-cert\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.053916 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c407954-b971-4641-b466-882aecfa452d-console-oauth-config\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.054538 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-2xtg6\" (UID: \"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a\") " 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.067969 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm8gd\" (UniqueName: \"kubernetes.io/projected/7c407954-b971-4641-b466-882aecfa452d-kube-api-access-zm8gd\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.237895 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.274663 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.354594 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a09e0456-1529-4ece-9266-d02a283d6bd1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.359513 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a09e0456-1529-4ece-9266-d02a283d6bd1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.565878 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.745677 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hl7ls"] Feb 03 10:17:55 crc kubenswrapper[5010]: W0203 10:17:55.787400 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod552fa369_352c_4690_aa39_f0364021feae.slice/crio-279091e0bf5fa5d7da037200c2d0b459b254335a9a3782229b5ef8f286367044 WatchSource:0}: Error finding container 279091e0bf5fa5d7da037200c2d0b459b254335a9a3782229b5ef8f286367044: Status 404 returned error can't find the container with id 279091e0bf5fa5d7da037200c2d0b459b254335a9a3782229b5ef8f286367044 Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.960715 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls" event={"ID":"552fa369-352c-4690-aa39-f0364021feae","Type":"ContainerStarted","Data":"279091e0bf5fa5d7da037200c2d0b459b254335a9a3782229b5ef8f286367044"} Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.006320 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85556757c-xgtrl"] Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.111402 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg"] Feb 03 10:17:56 crc kubenswrapper[5010]: W0203 10:17:56.124617 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda09e0456_1529_4ece_9266_d02a283d6bd1.slice/crio-e7acd10e9541fdf5180baea8b3e92f4170102e8831c35308def7d9c0999d2c81 WatchSource:0}: Error finding container e7acd10e9541fdf5180baea8b3e92f4170102e8831c35308def7d9c0999d2c81: Status 404 returned error can't find the container with id e7acd10e9541fdf5180baea8b3e92f4170102e8831c35308def7d9c0999d2c81 Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.283260 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6"] Feb 03 10:17:56 crc kubenswrapper[5010]: W0203 10:17:56.289560 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1336bbfa_f4c5_4e35_9b48_d0e8df8f3e7a.slice/crio-3ff4cb229308fbcddb94c59da343ba1bb478794881e8a1acafbe1c8a840438bc WatchSource:0}: Error finding container 3ff4cb229308fbcddb94c59da343ba1bb478794881e8a1acafbe1c8a840438bc: Status 404 returned error can't find the container with id 3ff4cb229308fbcddb94c59da343ba1bb478794881e8a1acafbe1c8a840438bc Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.968100 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85556757c-xgtrl" event={"ID":"7c407954-b971-4641-b466-882aecfa452d","Type":"ContainerStarted","Data":"7ef5324afbb31210395ef76208265ecaecefc136478d6f66e32869e8c859cd89"} Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.968284 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85556757c-xgtrl" event={"ID":"7c407954-b971-4641-b466-882aecfa452d","Type":"ContainerStarted","Data":"934148e529d4479274f5172ee1c039b370951c343ba6b0480f971775fe9fa002"} Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.970093 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" 
event={"ID":"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a","Type":"ContainerStarted","Data":"3ff4cb229308fbcddb94c59da343ba1bb478794881e8a1acafbe1c8a840438bc"} Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.971501 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" event={"ID":"a09e0456-1529-4ece-9266-d02a283d6bd1","Type":"ContainerStarted","Data":"e7acd10e9541fdf5180baea8b3e92f4170102e8831c35308def7d9c0999d2c81"} Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.993916 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-85556757c-xgtrl" podStartSLOduration=2.993866775 podStartE2EDuration="2.993866775s" podCreationTimestamp="2026-02-03 10:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:17:56.993400693 +0000 UTC m=+947.149376932" watchObservedRunningTime="2026-02-03 10:17:56.993866775 +0000 UTC m=+947.149842914" Feb 03 10:17:57 crc kubenswrapper[5010]: I0203 10:17:57.136852 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:57 crc kubenswrapper[5010]: I0203 10:17:57.136911 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:57 crc kubenswrapper[5010]: I0203 10:17:57.181144 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:58 crc kubenswrapper[5010]: I0203 10:17:58.054253 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:58 crc kubenswrapper[5010]: I0203 10:17:58.097345 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dk2xz"] Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.005863 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dk2xz" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="registry-server" containerID="cri-o://42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5" gracePeriod=2 Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.773070 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.910358 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-utilities\") pod \"aae42090-f4be-43c8-b0b1-90fe576195a3\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.910698 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjwsm\" (UniqueName: \"kubernetes.io/projected/aae42090-f4be-43c8-b0b1-90fe576195a3-kube-api-access-rjwsm\") pod \"aae42090-f4be-43c8-b0b1-90fe576195a3\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.910739 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-catalog-content\") pod \"aae42090-f4be-43c8-b0b1-90fe576195a3\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.911439 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-utilities" (OuterVolumeSpecName: "utilities") pod "aae42090-f4be-43c8-b0b1-90fe576195a3" (UID: "aae42090-f4be-43c8-b0b1-90fe576195a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.922448 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae42090-f4be-43c8-b0b1-90fe576195a3-kube-api-access-rjwsm" (OuterVolumeSpecName: "kube-api-access-rjwsm") pod "aae42090-f4be-43c8-b0b1-90fe576195a3" (UID: "aae42090-f4be-43c8-b0b1-90fe576195a3"). InnerVolumeSpecName "kube-api-access-rjwsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.967726 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aae42090-f4be-43c8-b0b1-90fe576195a3" (UID: "aae42090-f4be-43c8-b0b1-90fe576195a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.012181 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.012205 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.012229 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjwsm\" (UniqueName: \"kubernetes.io/projected/aae42090-f4be-43c8-b0b1-90fe576195a3-kube-api-access-rjwsm\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.020829 5010 generic.go:334] "Generic (PLEG): container finished" podID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerID="42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5" exitCode=0 Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.020879 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk2xz" event={"ID":"aae42090-f4be-43c8-b0b1-90fe576195a3","Type":"ContainerDied","Data":"42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5"} Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.020909 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk2xz" event={"ID":"aae42090-f4be-43c8-b0b1-90fe576195a3","Type":"ContainerDied","Data":"a8bec8e2b56c771c7079c4cac54a1acdfd8e585a247992ddbbfe6031d2222fb8"} Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.020931 5010 scope.go:117] "RemoveContainer" containerID="42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.021086 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.055308 5010 scope.go:117] "RemoveContainer" containerID="646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.056306 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dk2xz"] Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.061738 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dk2xz"] Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.099774 5010 scope.go:117] "RemoveContainer" containerID="5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.120276 5010 scope.go:117] "RemoveContainer" containerID="42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5" Feb 03 10:18:01 crc kubenswrapper[5010]: E0203 10:18:01.123368 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5\": container with ID starting with 42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5 not found: ID does not exist" containerID="42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.123406 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5"} err="failed to get container status \"42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5\": rpc error: code = NotFound desc = could not find container \"42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5\": container with ID starting with 42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5 not found: ID does not exist" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.123432 5010 scope.go:117] "RemoveContainer" containerID="646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769" Feb 03 10:18:01 crc kubenswrapper[5010]: E0203 10:18:01.123806 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769\": container with ID starting with 646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769 not found: ID does not exist" containerID="646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.123833 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769"} err="failed to get container status \"646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769\": rpc error: code = NotFound desc = could not find container \"646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769\": container with ID starting with 646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769 not found: ID does not exist" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.123850 5010 scope.go:117] "RemoveContainer" containerID="5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d" Feb 03 10:18:01 crc kubenswrapper[5010]: E0203 10:18:01.125358 5010 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d\": container with ID starting with 5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d not found: ID does not exist" containerID="5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.125391 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d"} err="failed to get container status \"5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d\": rpc error: code = NotFound desc = could not find container \"5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d\": container with ID starting with 5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d not found: ID does not exist" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.799845 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.852947 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.031188 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" event={"ID":"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a","Type":"ContainerStarted","Data":"914f343841c6c49951d3f9e532eaff729c8c8a12f3dd90eb117eb0a5db2a5799"} Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.031858 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.035422 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-55jg2" event={"ID":"d47b696a-a1d0-4389-a099-7f375ab72f8c","Type":"ContainerStarted","Data":"bd46edd0bf6b0328b0b416fd6991b88ce38b9657e6b4984ab8015caf312909ad"} Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.035641 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.037333 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls" event={"ID":"552fa369-352c-4690-aa39-f0364021feae","Type":"ContainerStarted","Data":"8eeea6bb6655951282cb8cc5b8e5aa47576a34145ef0e0a35843a13a66dfaef7"} Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.039333 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" event={"ID":"a09e0456-1529-4ece-9266-d02a283d6bd1","Type":"ContainerStarted","Data":"e0199070d116252057e18c698125ba1e46cd9c4a0ceacf81e9ba2c6be88888a7"} Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.051188 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" podStartSLOduration=3.34070146 podStartE2EDuration="8.051169438s" podCreationTimestamp="2026-02-03 10:17:54 +0000 UTC" firstStartedPulling="2026-02-03 10:17:56.292809112 +0000 UTC m=+946.448785251" lastFinishedPulling="2026-02-03 10:18:01.0032771 +0000 UTC m=+951.159253229" observedRunningTime="2026-02-03 10:18:02.049838134 +0000 UTC m=+952.205814293" 
watchObservedRunningTime="2026-02-03 10:18:02.051169438 +0000 UTC m=+952.207145567" Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.070948 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" podStartSLOduration=3.207698345 podStartE2EDuration="8.070924695s" podCreationTimestamp="2026-02-03 10:17:54 +0000 UTC" firstStartedPulling="2026-02-03 10:17:56.127495827 +0000 UTC m=+946.283471956" lastFinishedPulling="2026-02-03 10:18:00.990722177 +0000 UTC m=+951.146698306" observedRunningTime="2026-02-03 10:18:02.067102047 +0000 UTC m=+952.223078176" watchObservedRunningTime="2026-02-03 10:18:02.070924695 +0000 UTC m=+952.226900824" Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.095274 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-55jg2" podStartSLOduration=1.9212982840000001 podStartE2EDuration="8.09524052s" podCreationTimestamp="2026-02-03 10:17:54 +0000 UTC" firstStartedPulling="2026-02-03 10:17:54.828629895 +0000 UTC m=+944.984606024" lastFinishedPulling="2026-02-03 10:18:01.002572131 +0000 UTC m=+951.158548260" observedRunningTime="2026-02-03 10:18:02.092242933 +0000 UTC m=+952.248219072" watchObservedRunningTime="2026-02-03 10:18:02.09524052 +0000 UTC m=+952.251216649" Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.530623 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" path="/var/lib/kubelet/pods/aae42090-f4be-43c8-b0b1-90fe576195a3/volumes" Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.004451 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jw95h"] Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.045886 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jw95h" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="registry-server" containerID="cri-o://c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981" gracePeriod=2 Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.788731 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.880378 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q58z\" (UniqueName: \"kubernetes.io/projected/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-kube-api-access-4q58z\") pod \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.880537 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-utilities\") pod \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.880569 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-catalog-content\") pod \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.881652 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-utilities" (OuterVolumeSpecName: "utilities") pod "a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" (UID: "a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.888426 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-kube-api-access-4q58z" (OuterVolumeSpecName: "kube-api-access-4q58z") pod "a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" (UID: "a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96"). InnerVolumeSpecName "kube-api-access-4q58z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.982384 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q58z\" (UniqueName: \"kubernetes.io/projected/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-kube-api-access-4q58z\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.982759 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.999121 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" (UID: "a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.052303 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls" event={"ID":"552fa369-352c-4690-aa39-f0364021feae","Type":"ContainerStarted","Data":"6c580a63487f5ce48ebe5fe9ebbd7d8e657990d0e8338ef14f54796ae9c62b21"} Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.053544 5010 generic.go:334] "Generic (PLEG): container finished" podID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerID="c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981" exitCode=0 Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.053570 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw95h" event={"ID":"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96","Type":"ContainerDied","Data":"c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981"} Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.053592 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.053621 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw95h" event={"ID":"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96","Type":"ContainerDied","Data":"60580599bfa6e867910c3854625eecb82cba759cc65d13303775a63e7e0ee852"} Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.053642 5010 scope.go:117] "RemoveContainer" containerID="c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.070139 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls" podStartSLOduration=3.099355845 podStartE2EDuration="11.070119561s" podCreationTimestamp="2026-02-03 10:17:53 +0000 UTC" firstStartedPulling="2026-02-03 10:17:55.791988282 +0000 UTC m=+945.947964411" lastFinishedPulling="2026-02-03 10:18:03.762751998 +0000 UTC m=+953.918728127" observedRunningTime="2026-02-03 10:18:04.069441103 +0000 UTC m=+954.225417242" watchObservedRunningTime="2026-02-03 10:18:04.070119561 +0000 UTC m=+954.226095700" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.086899 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.094602 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jw95h"] Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.095076 5010 scope.go:117] "RemoveContainer" containerID="3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.100007 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jw95h"] Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.113663 5010 scope.go:117] "RemoveContainer" containerID="443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.127412 5010 scope.go:117] "RemoveContainer" containerID="c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981" Feb 03 10:18:04 crc kubenswrapper[5010]: E0203 10:18:04.127815 5010 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981\": container with ID starting with c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981 not found: ID does not exist" containerID="c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.127843 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981"} err="failed to get container status \"c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981\": rpc error: code = NotFound desc = could not find container \"c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981\": container with ID starting with c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981 not found: ID does not exist" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.127863 5010 scope.go:117] "RemoveContainer" containerID="3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c" Feb 03 10:18:04 crc kubenswrapper[5010]: E0203 10:18:04.128208 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c\": container with ID starting with 3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c not found: ID does not exist" containerID="3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.128256 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c"} err="failed to get container status \"3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c\": rpc error: code = NotFound desc = could not find container \"3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c\": container with ID starting with 3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c not found: ID does not exist" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.128275 5010 scope.go:117] "RemoveContainer" containerID="443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9" Feb 03 10:18:04 crc kubenswrapper[5010]: E0203 10:18:04.128635 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9\": container with ID starting with 443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9 not found: ID does not exist" containerID="443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.128705 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9"} err="failed to get container status \"443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9\": rpc error: code = NotFound desc = could not find container \"443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9\": container with ID starting with 443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9 not found: ID does not exist" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.509288 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" path="/var/lib/kubelet/pods/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96/volumes" Feb 03 10:18:05 crc kubenswrapper[5010]: I0203 10:18:05.275339 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:18:05 crc kubenswrapper[5010]: I0203 10:18:05.275391 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:18:05 crc kubenswrapper[5010]: I0203 10:18:05.280701 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:18:06 crc kubenswrapper[5010]: I0203 10:18:06.071018 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:18:06 crc kubenswrapper[5010]: I0203 10:18:06.126772 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wtcpj"] Feb 03 10:18:09 crc kubenswrapper[5010]: I0203 10:18:09.817825 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:18:15 crc kubenswrapper[5010]: I0203 10:18:15.245528 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:18:16 crc kubenswrapper[5010]: I0203 10:18:16.389891 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:18:16 crc kubenswrapper[5010]: I0203 10:18:16.390280 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:18:16 crc kubenswrapper[5010]: I0203 10:18:16.390353 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:18:16 crc kubenswrapper[5010]: I0203 10:18:16.391168 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9442102e724f69e1d556f61f5773f0e8e33b6a283cb3f40b3f679d223bc6c1e0"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:18:16 crc kubenswrapper[5010]: I0203 10:18:16.391247 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://9442102e724f69e1d556f61f5773f0e8e33b6a283cb3f40b3f679d223bc6c1e0" gracePeriod=600 Feb 03 10:18:17 crc kubenswrapper[5010]: I0203 10:18:17.124767 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="9442102e724f69e1d556f61f5773f0e8e33b6a283cb3f40b3f679d223bc6c1e0" exitCode=0 Feb 03 10:18:17 crc kubenswrapper[5010]: I0203 10:18:17.124832 5010 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"9442102e724f69e1d556f61f5773f0e8e33b6a283cb3f40b3f679d223bc6c1e0"} Feb 03 10:18:17 crc kubenswrapper[5010]: I0203 10:18:17.125064 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"221f195b125299df734f26b3fd40fd966d81cfff3c339b70c815feda6a5e1f4b"} Feb 03 10:18:17 crc kubenswrapper[5010]: I0203 10:18:17.125083 5010 scope.go:117] "RemoveContainer" containerID="8680190c062bea3a65ab9dd9a4d956ebc68c414b2e8a2f0c41a9c5b1c0cfad9d" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.709479 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz"] Feb 03 10:18:27 crc kubenswrapper[5010]: E0203 10:18:27.710184 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="registry-server" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710199 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="registry-server" Feb 03 10:18:27 crc kubenswrapper[5010]: E0203 10:18:27.710211 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="extract-content" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710272 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="extract-content" Feb 03 10:18:27 crc kubenswrapper[5010]: E0203 10:18:27.710286 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="extract-content" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710295 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="extract-content" Feb 03 10:18:27 crc kubenswrapper[5010]: E0203 10:18:27.710317 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="extract-utilities" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710325 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="extract-utilities" Feb 03 10:18:27 crc kubenswrapper[5010]: E0203 10:18:27.710334 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="extract-utilities" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710341 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="extract-utilities" Feb 03 10:18:27 crc kubenswrapper[5010]: E0203 10:18:27.710354 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="registry-server" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710361 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="registry-server" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710486 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="registry-server" Feb 03 
10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710501 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="registry-server" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.711375 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.713598 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.718801 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz"] Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.815165 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwctl\" (UniqueName: \"kubernetes.io/projected/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-kube-api-access-hwctl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.815291 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.815314 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.916973 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwctl\" (UniqueName: \"kubernetes.io/projected/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-kube-api-access-hwctl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.917046 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.917072 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: 
\"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.917745 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.918418 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.940857 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwctl\" (UniqueName: \"kubernetes.io/projected/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-kube-api-access-hwctl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:28 crc kubenswrapper[5010]: I0203 10:18:28.031036 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:28 crc kubenswrapper[5010]: I0203 10:18:28.421278 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz"] Feb 03 10:18:29 crc kubenswrapper[5010]: I0203 10:18:29.207088 5010 generic.go:334] "Generic (PLEG): container finished" podID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerID="a3cd82fc92cf61c5f18a09e764f5dd61187286d6b948cfb9d63c617df319c44e" exitCode=0 Feb 03 10:18:29 crc kubenswrapper[5010]: I0203 10:18:29.207149 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" event={"ID":"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119","Type":"ContainerDied","Data":"a3cd82fc92cf61c5f18a09e764f5dd61187286d6b948cfb9d63c617df319c44e"} Feb 03 10:18:29 crc kubenswrapper[5010]: I0203 10:18:29.207181 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" event={"ID":"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119","Type":"ContainerStarted","Data":"14bed4434d2304991aed20b7bafe268c89811d3d8bf20fc4eded5ec1946a7807"} Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.166762 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-wtcpj" podUID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" containerName="console" containerID="cri-o://f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa" gracePeriod=15 Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.226600 5010 generic.go:334] "Generic (PLEG): container finished" podID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerID="762723c8c7f4f28f6095a48162888e71c936fac571db0915653fe6246dcf24e0" exitCode=0 Feb 03 10:18:31 
crc kubenswrapper[5010]: I0203 10:18:31.226651 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" event={"ID":"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119","Type":"ContainerDied","Data":"762723c8c7f4f28f6095a48162888e71c936fac571db0915653fe6246dcf24e0"} Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.602183 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wtcpj_61f7221f-b9e1-45bc-8a9e-2f512c9e457d/console/0.log" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.602279 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.772042 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-config\") pod \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.773019 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-config" (OuterVolumeSpecName: "console-config") pod "61f7221f-b9e1-45bc-8a9e-2f512c9e457d" (UID: "61f7221f-b9e1-45bc-8a9e-2f512c9e457d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.773471 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-oauth-serving-cert\") pod \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.773549 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwvg\" (UniqueName: \"kubernetes.io/projected/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-kube-api-access-kfwvg\") pod \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.773599 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-oauth-config\") pod \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.773642 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-serving-cert\") pod \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.773725 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-trusted-ca-bundle\") pod \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.773765 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-service-ca\") pod \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.774418 5010 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.774441 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "61f7221f-b9e1-45bc-8a9e-2f512c9e457d" (UID: "61f7221f-b9e1-45bc-8a9e-2f512c9e457d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.774631 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "61f7221f-b9e1-45bc-8a9e-2f512c9e457d" (UID: "61f7221f-b9e1-45bc-8a9e-2f512c9e457d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.774854 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-service-ca" (OuterVolumeSpecName: "service-ca") pod "61f7221f-b9e1-45bc-8a9e-2f512c9e457d" (UID: "61f7221f-b9e1-45bc-8a9e-2f512c9e457d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.779031 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-kube-api-access-kfwvg" (OuterVolumeSpecName: "kube-api-access-kfwvg") pod "61f7221f-b9e1-45bc-8a9e-2f512c9e457d" (UID: "61f7221f-b9e1-45bc-8a9e-2f512c9e457d"). InnerVolumeSpecName "kube-api-access-kfwvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.779909 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "61f7221f-b9e1-45bc-8a9e-2f512c9e457d" (UID: "61f7221f-b9e1-45bc-8a9e-2f512c9e457d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.784255 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "61f7221f-b9e1-45bc-8a9e-2f512c9e457d" (UID: "61f7221f-b9e1-45bc-8a9e-2f512c9e457d"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.875368 5010 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.875400 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwvg\" (UniqueName: \"kubernetes.io/projected/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-kube-api-access-kfwvg\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.875410 5010 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.875420 5010 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.875429 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.875439 5010 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.235093 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wtcpj_61f7221f-b9e1-45bc-8a9e-2f512c9e457d/console/0.log" Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.235187 5010 generic.go:334] "Generic (PLEG): container finished" podID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" containerID="f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa" exitCode=2 Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.235299 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.235326 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wtcpj" event={"ID":"61f7221f-b9e1-45bc-8a9e-2f512c9e457d","Type":"ContainerDied","Data":"f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa"} Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.235408 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wtcpj" event={"ID":"61f7221f-b9e1-45bc-8a9e-2f512c9e457d","Type":"ContainerDied","Data":"e28ff007b543d7700a90a71c76b34e3da1bf25749689935b2de9d5cc48606a37"} Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.235440 5010 scope.go:117] "RemoveContainer" containerID="f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa" Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.239887 5010 generic.go:334] "Generic (PLEG): container finished" podID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerID="7da7195b6792681ec21c8254b8f2e079622d47ffe69d268a6a9e6c70dbadbff6" exitCode=0 Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.239942 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" event={"ID":"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119","Type":"ContainerDied","Data":"7da7195b6792681ec21c8254b8f2e079622d47ffe69d268a6a9e6c70dbadbff6"} Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.256045 5010 scope.go:117] "RemoveContainer" containerID="f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa" Feb 03 10:18:32 crc kubenswrapper[5010]: E0203 10:18:32.256503 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa\": container with ID starting with f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa not found: ID does not exist" containerID="f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa" Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.256572 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa"} err="failed to get container status \"f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa\": rpc error: code = NotFound desc = could not find container \"f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa\": container with ID starting with f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa not found: ID does not exist" Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.274965 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wtcpj"] Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.280105 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-wtcpj"] Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.523483 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" path="/var/lib/kubelet/pods/61f7221f-b9e1-45bc-8a9e-2f512c9e457d/volumes" Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.580067 5010 patch_prober.go:28] interesting pod/console-f9d7485db-wtcpj container/console namespace/openshift-console: Readiness probe status=failure output="Get 
\"https://10.217.0.7:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.580184 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-wtcpj" podUID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.490377 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.595949 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-bundle\") pod \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.596679 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-util\") pod \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.596740 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwctl\" (UniqueName: \"kubernetes.io/projected/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-kube-api-access-hwctl\") pod \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.598743 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-bundle" (OuterVolumeSpecName: "bundle") pod "bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" (UID: "bad8c1c1-8f3a-45e1-a3c4-fa197d93d119"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.600587 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-kube-api-access-hwctl" (OuterVolumeSpecName: "kube-api-access-hwctl") pod "bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" (UID: "bad8c1c1-8f3a-45e1-a3c4-fa197d93d119"). InnerVolumeSpecName "kube-api-access-hwctl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.612142 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-util" (OuterVolumeSpecName: "util") pod "bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" (UID: "bad8c1c1-8f3a-45e1-a3c4-fa197d93d119"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.697845 5010 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-util\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.697876 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwctl\" (UniqueName: \"kubernetes.io/projected/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-kube-api-access-hwctl\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.697890 5010 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:34 crc kubenswrapper[5010]: I0203 10:18:34.254930 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" event={"ID":"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119","Type":"ContainerDied","Data":"14bed4434d2304991aed20b7bafe268c89811d3d8bf20fc4eded5ec1946a7807"} Feb 03 10:18:34 crc kubenswrapper[5010]: I0203 10:18:34.254966 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14bed4434d2304991aed20b7bafe268c89811d3d8bf20fc4eded5ec1946a7807" Feb 03 10:18:34 crc kubenswrapper[5010]: I0203 10:18:34.255030 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.468820 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc"] Feb 03 10:18:43 crc kubenswrapper[5010]: E0203 10:18:43.469670 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerName="extract" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.469690 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerName="extract" Feb 03 10:18:43 crc kubenswrapper[5010]: E0203 10:18:43.469724 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" containerName="console" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.469732 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" containerName="console" Feb 03 10:18:43 crc kubenswrapper[5010]: E0203 10:18:43.469745 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerName="pull" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.469752 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerName="pull" Feb 03 10:18:43 crc kubenswrapper[5010]: E0203 10:18:43.469765 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerName="util" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.469771 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerName="util" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.469894 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" containerName="console" Feb 
03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.469918 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerName="extract" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.470399 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.475421 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.477470 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.477692 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-9sxcg" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.477778 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.489324 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.626912 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ec28393-ea76-4413-a903-612126368291-apiservice-cert\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.627007 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgqdx\" (UniqueName: \"kubernetes.io/projected/5ec28393-ea76-4413-a903-612126368291-kube-api-access-bgqdx\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.627055 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ec28393-ea76-4413-a903-612126368291-webhook-cert\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.728783 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgqdx\" (UniqueName: \"kubernetes.io/projected/5ec28393-ea76-4413-a903-612126368291-kube-api-access-bgqdx\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.729487 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ec28393-ea76-4413-a903-612126368291-webhook-cert\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " 
pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.729574 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ec28393-ea76-4413-a903-612126368291-apiservice-cert\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.734912 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ec28393-ea76-4413-a903-612126368291-apiservice-cert\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.745128 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ec28393-ea76-4413-a903-612126368291-webhook-cert\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.765060 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc"] Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.792171 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgqdx\" (UniqueName: \"kubernetes.io/projected/5ec28393-ea76-4413-a903-612126368291-kube-api-access-bgqdx\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.071173 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l"] Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.072469 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.075387 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.075941 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.079470 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-jxsgn" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.090573 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.092323 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l"] Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.274641 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d90f33c9-1c81-4b74-a905-71aed9ecf222-apiservice-cert\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.274990 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d90f33c9-1c81-4b74-a905-71aed9ecf222-webhook-cert\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.275051 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd8v9\" (UniqueName: \"kubernetes.io/projected/d90f33c9-1c81-4b74-a905-71aed9ecf222-kube-api-access-bd8v9\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.375985 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d90f33c9-1c81-4b74-a905-71aed9ecf222-apiservice-cert\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.376049 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d90f33c9-1c81-4b74-a905-71aed9ecf222-webhook-cert\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.376097 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd8v9\" (UniqueName: \"kubernetes.io/projected/d90f33c9-1c81-4b74-a905-71aed9ecf222-kube-api-access-bd8v9\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.385092 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d90f33c9-1c81-4b74-a905-71aed9ecf222-webhook-cert\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.397627 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/d90f33c9-1c81-4b74-a905-71aed9ecf222-apiservice-cert\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.403971 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd8v9\" (UniqueName: \"kubernetes.io/projected/d90f33c9-1c81-4b74-a905-71aed9ecf222-kube-api-access-bd8v9\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.655457 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc"] Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.688199 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.945793 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l"] Feb 03 10:18:44 crc kubenswrapper[5010]: W0203 10:18:44.951333 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd90f33c9_1c81_4b74_a905_71aed9ecf222.slice/crio-2ce7775edf5a531a3e3b4029ab154de0bbfd3152c770357d92c60d9f1883030d WatchSource:0}: Error finding container 2ce7775edf5a531a3e3b4029ab154de0bbfd3152c770357d92c60d9f1883030d: Status 404 returned error can't find the container with id 2ce7775edf5a531a3e3b4029ab154de0bbfd3152c770357d92c60d9f1883030d Feb 03 10:18:45 crc kubenswrapper[5010]: I0203 10:18:45.333639 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" event={"ID":"d90f33c9-1c81-4b74-a905-71aed9ecf222","Type":"ContainerStarted","Data":"2ce7775edf5a531a3e3b4029ab154de0bbfd3152c770357d92c60d9f1883030d"} Feb 03 10:18:45 crc kubenswrapper[5010]: I0203 10:18:45.334858 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" event={"ID":"5ec28393-ea76-4413-a903-612126368291","Type":"ContainerStarted","Data":"d19d2b325111314fc861c760f2b9cb42288c25df075dbc6b00ae442830b75f6f"} Feb 03 10:18:48 crc kubenswrapper[5010]: I0203 10:18:48.357887 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" event={"ID":"5ec28393-ea76-4413-a903-612126368291","Type":"ContainerStarted","Data":"137317201a6cf8d3a21d714dc3ffe84540e77add914f23bebf6c6570d6b3191a"} Feb 03 10:18:48 crc kubenswrapper[5010]: I0203 10:18:48.358472 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:48 crc kubenswrapper[5010]: I0203 10:18:48.384600 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" podStartSLOduration=2.461425962 podStartE2EDuration="5.384574673s" podCreationTimestamp="2026-02-03 10:18:43 +0000 UTC" firstStartedPulling="2026-02-03 10:18:44.663275716 +0000 UTC m=+994.819251845" lastFinishedPulling="2026-02-03 10:18:47.586424427 +0000 UTC m=+997.742400556" 
observedRunningTime="2026-02-03 10:18:48.380447987 +0000 UTC m=+998.536424126" watchObservedRunningTime="2026-02-03 10:18:48.384574673 +0000 UTC m=+998.540550802" Feb 03 10:18:50 crc kubenswrapper[5010]: I0203 10:18:50.375592 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" event={"ID":"d90f33c9-1c81-4b74-a905-71aed9ecf222","Type":"ContainerStarted","Data":"c05cac75c128c1602ab7126d8350064fe25bdf02927bbdfc0099644847764635"} Feb 03 10:18:50 crc kubenswrapper[5010]: I0203 10:18:50.375954 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:50 crc kubenswrapper[5010]: I0203 10:18:50.398394 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" podStartSLOduration=1.930403944 podStartE2EDuration="6.398369243s" podCreationTimestamp="2026-02-03 10:18:44 +0000 UTC" firstStartedPulling="2026-02-03 10:18:44.954523525 +0000 UTC m=+995.110499654" lastFinishedPulling="2026-02-03 10:18:49.422488814 +0000 UTC m=+999.578464953" observedRunningTime="2026-02-03 10:18:50.395086019 +0000 UTC m=+1000.551062158" watchObservedRunningTime="2026-02-03 10:18:50.398369243 +0000 UTC m=+1000.554345382" Feb 03 10:19:04 crc kubenswrapper[5010]: I0203 10:19:04.694185 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.094284 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.913957 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-2lwr2"] Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.916948 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.922633 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.922866 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.922970 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-22lgm" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.934785 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw"] Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.936154 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.940621 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.965256 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw"] Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.051800 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-mlsql"] Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.052891 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.055412 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.056092 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.057405 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.057842 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-wg7nb" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.069751 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-lpqgh"] Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.070572 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.073488 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077359 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6ea4a71-2a4d-48cd-9dda-ba453a1c8766-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dbqxw\" (UID: \"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077390 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78k6c\" (UniqueName: \"kubernetes.io/projected/f6ea4a71-2a4d-48cd-9dda-ba453a1c8766-kube-api-access-78k6c\") pod \"frr-k8s-webhook-server-7df86c4f6c-dbqxw\" (UID: \"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077418 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-reloader\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077439 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-sockets\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " 
pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077473 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-metrics-certs\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077491 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxx7s\" (UniqueName: \"kubernetes.io/projected/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-kube-api-access-qxx7s\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077509 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-metrics\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077526 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-conf\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077551 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-startup\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.098315 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-lpqgh"] Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.178927 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/72e88a76-8c59-4d07-813e-d7d505d14c3b-metallb-excludel2\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.178976 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6ea4a71-2a4d-48cd-9dda-ba453a1c8766-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dbqxw\" (UID: \"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.178998 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78k6c\" (UniqueName: \"kubernetes.io/projected/f6ea4a71-2a4d-48cd-9dda-ba453a1c8766-kube-api-access-78k6c\") pod \"frr-k8s-webhook-server-7df86c4f6c-dbqxw\" (UID: \"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179019 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wllv6\" (UniqueName: 
\"kubernetes.io/projected/72e88a76-8c59-4d07-813e-d7d505d14c3b-kube-api-access-wllv6\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179059 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-metrics-certs\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179081 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-reloader\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179101 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-sockets\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179124 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjwwz\" (UniqueName: \"kubernetes.io/projected/19f856e9-2325-41eb-8ed3-4daff562e84a-kube-api-access-wjwwz\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179148 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-cert\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179170 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-metrics-certs\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179191 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxx7s\" (UniqueName: \"kubernetes.io/projected/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-kube-api-access-qxx7s\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179207 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-metrics-certs\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179249 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-metrics\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " 
pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179279 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-conf\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179306 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179334 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-startup\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.180256 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-startup\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.181450 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-metrics\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.181568 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-conf\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.181625 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-reloader\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.181892 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-sockets\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.188973 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-metrics-certs\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.189598 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6ea4a71-2a4d-48cd-9dda-ba453a1c8766-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dbqxw\" (UID: \"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.201172 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78k6c\" (UniqueName: \"kubernetes.io/projected/f6ea4a71-2a4d-48cd-9dda-ba453a1c8766-kube-api-access-78k6c\") pod \"frr-k8s-webhook-server-7df86c4f6c-dbqxw\" (UID: \"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.221512 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxx7s\" (UniqueName: \"kubernetes.io/projected/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-kube-api-access-qxx7s\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.257375 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.270554 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.279963 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wllv6\" (UniqueName: \"kubernetes.io/projected/72e88a76-8c59-4d07-813e-d7d505d14c3b-kube-api-access-wllv6\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.280011 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-metrics-certs\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.280068 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjwwz\" (UniqueName: \"kubernetes.io/projected/19f856e9-2325-41eb-8ed3-4daff562e84a-kube-api-access-wjwwz\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.280107 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-cert\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.280149 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-metrics-certs\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: E0203 10:19:25.280174 5010 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.280191 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: E0203 10:19:25.280264 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-metrics-certs podName:19f856e9-2325-41eb-8ed3-4daff562e84a nodeName:}" failed. No retries permitted until 2026-02-03 10:19:25.780240561 +0000 UTC m=+1035.936216690 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-metrics-certs") pod "controller-6968d8fdc4-lpqgh" (UID: "19f856e9-2325-41eb-8ed3-4daff562e84a") : secret "controller-certs-secret" not found Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.280286 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/72e88a76-8c59-4d07-813e-d7d505d14c3b-metallb-excludel2\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: E0203 10:19:25.280296 5010 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 03 10:19:25 crc kubenswrapper[5010]: E0203 10:19:25.280330 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist podName:72e88a76-8c59-4d07-813e-d7d505d14c3b nodeName:}" failed. No retries permitted until 2026-02-03 10:19:25.780317483 +0000 UTC m=+1035.936293622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist") pod "speaker-mlsql" (UID: "72e88a76-8c59-4d07-813e-d7d505d14c3b") : secret "metallb-memberlist" not found Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.281048 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/72e88a76-8c59-4d07-813e-d7d505d14c3b-metallb-excludel2\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.282794 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.284618 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-metrics-certs\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.294301 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-cert\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.295493 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wllv6\" (UniqueName: \"kubernetes.io/projected/72e88a76-8c59-4d07-813e-d7d505d14c3b-kube-api-access-wllv6\") pod \"speaker-mlsql\" (UID: 
\"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.299935 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjwwz\" (UniqueName: \"kubernetes.io/projected/19f856e9-2325-41eb-8ed3-4daff562e84a-kube-api-access-wjwwz\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.788172 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.788248 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-metrics-certs\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: E0203 10:19:25.788348 5010 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 03 10:19:25 crc kubenswrapper[5010]: E0203 10:19:25.788421 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist podName:72e88a76-8c59-4d07-813e-d7d505d14c3b nodeName:}" failed. No retries permitted until 2026-02-03 10:19:26.78840272 +0000 UTC m=+1036.944378849 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist") pod "speaker-mlsql" (UID: "72e88a76-8c59-4d07-813e-d7d505d14c3b") : secret "metallb-memberlist" not found Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.793999 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-metrics-certs\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.794441 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw"] Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.912491 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" event={"ID":"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766","Type":"ContainerStarted","Data":"a253028265a27ce0e11b3e3849e1a3ac3e9fde42fef061c1469257b50049e5a7"} Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.913391 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerStarted","Data":"01453a2818ff94a8915f3e81e8de25511c89e4a9454eb648bd0e2f7af01cbae7"} Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.986095 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.201537 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-lpqgh"] Feb 03 10:19:26 crc kubenswrapper[5010]: W0203 10:19:26.210917 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19f856e9_2325_41eb_8ed3_4daff562e84a.slice/crio-9df4bf419d874cabf3eae1eaa610220c77222c7130a1f4414a4518089d6f716d WatchSource:0}: Error finding container 9df4bf419d874cabf3eae1eaa610220c77222c7130a1f4414a4518089d6f716d: Status 404 returned error can't find the container with id 9df4bf419d874cabf3eae1eaa610220c77222c7130a1f4414a4518089d6f716d Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.802880 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.807324 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.872682 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-mlsql" Feb 03 10:19:26 crc kubenswrapper[5010]: W0203 10:19:26.895082 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72e88a76_8c59_4d07_813e_d7d505d14c3b.slice/crio-3485d30491a5e697838728824aeec50d9a29751e88e9143f609c70084c0bbf21 WatchSource:0}: Error finding container 3485d30491a5e697838728824aeec50d9a29751e88e9143f609c70084c0bbf21: Status 404 returned error can't find the container with id 3485d30491a5e697838728824aeec50d9a29751e88e9143f609c70084c0bbf21 Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.920094 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lpqgh" event={"ID":"19f856e9-2325-41eb-8ed3-4daff562e84a","Type":"ContainerStarted","Data":"f38b71a25ab14fa3e82a7778ddbb4430e03d64c773dc23f472818e0dff2e79a9"} Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.920160 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lpqgh" event={"ID":"19f856e9-2325-41eb-8ed3-4daff562e84a","Type":"ContainerStarted","Data":"f390d4927b128ff0cf6da15910b38388dbd985cf3049fd4c3f7a4e7957c17c12"} Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.920186 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.920204 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lpqgh" event={"ID":"19f856e9-2325-41eb-8ed3-4daff562e84a","Type":"ContainerStarted","Data":"9df4bf419d874cabf3eae1eaa610220c77222c7130a1f4414a4518089d6f716d"} Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.920870 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-mlsql" 
event={"ID":"72e88a76-8c59-4d07-813e-d7d505d14c3b","Type":"ContainerStarted","Data":"3485d30491a5e697838728824aeec50d9a29751e88e9143f609c70084c0bbf21"} Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.939824 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-lpqgh" podStartSLOduration=1.939804166 podStartE2EDuration="1.939804166s" podCreationTimestamp="2026-02-03 10:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:19:26.935759262 +0000 UTC m=+1037.091735421" watchObservedRunningTime="2026-02-03 10:19:26.939804166 +0000 UTC m=+1037.095780305" Feb 03 10:19:27 crc kubenswrapper[5010]: I0203 10:19:27.978038 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-mlsql" event={"ID":"72e88a76-8c59-4d07-813e-d7d505d14c3b","Type":"ContainerStarted","Data":"17ec44bd6f4c15bdda152c97fb08b1b6d4f4ffdce03bf0542268ec3e643b0d0c"} Feb 03 10:19:27 crc kubenswrapper[5010]: I0203 10:19:27.978079 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-mlsql" event={"ID":"72e88a76-8c59-4d07-813e-d7d505d14c3b","Type":"ContainerStarted","Data":"ee2dbe1e9eeca94b7f9b024f99d7761c6b2f63ca3871d8a2c84e4ece5c4a0858"} Feb 03 10:19:27 crc kubenswrapper[5010]: I0203 10:19:27.978103 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-mlsql" Feb 03 10:19:28 crc kubenswrapper[5010]: I0203 10:19:28.006987 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-mlsql" podStartSLOduration=3.006967799 podStartE2EDuration="3.006967799s" podCreationTimestamp="2026-02-03 10:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:19:28.002205177 +0000 UTC m=+1038.158181326" watchObservedRunningTime="2026-02-03 10:19:28.006967799 +0000 UTC m=+1038.162943928" Feb 03 10:19:38 crc kubenswrapper[5010]: I0203 10:19:38.111548 5010 generic.go:334] "Generic (PLEG): container finished" podID="4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5" containerID="eee945f5cb01663746714448a20c0735d4264b42915d138bc8ea2fe9b67de247" exitCode=0 Feb 03 10:19:38 crc kubenswrapper[5010]: I0203 10:19:38.111986 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerDied","Data":"eee945f5cb01663746714448a20c0735d4264b42915d138bc8ea2fe9b67de247"} Feb 03 10:19:38 crc kubenswrapper[5010]: I0203 10:19:38.113993 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" event={"ID":"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766","Type":"ContainerStarted","Data":"ba05f2744a466a2727a76e31377b4993405a89f3d817fb665106d2d3d0aeb271"} Feb 03 10:19:38 crc kubenswrapper[5010]: I0203 10:19:38.114251 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:38 crc kubenswrapper[5010]: I0203 10:19:38.160719 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" podStartSLOduration=2.864725262 podStartE2EDuration="14.160694551s" podCreationTimestamp="2026-02-03 10:19:24 +0000 UTC" firstStartedPulling="2026-02-03 10:19:25.804537354 +0000 UTC m=+1035.960513483" 
lastFinishedPulling="2026-02-03 10:19:37.100506643 +0000 UTC m=+1047.256482772" observedRunningTime="2026-02-03 10:19:38.154086861 +0000 UTC m=+1048.310063000" watchObservedRunningTime="2026-02-03 10:19:38.160694551 +0000 UTC m=+1048.316670680" Feb 03 10:19:39 crc kubenswrapper[5010]: I0203 10:19:39.121640 5010 generic.go:334] "Generic (PLEG): container finished" podID="4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5" containerID="18368ccf63f783db882a121c7b947b3387b300c8f7a80a947c097d8261fdb770" exitCode=0 Feb 03 10:19:39 crc kubenswrapper[5010]: I0203 10:19:39.121734 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerDied","Data":"18368ccf63f783db882a121c7b947b3387b300c8f7a80a947c097d8261fdb770"} Feb 03 10:19:40 crc kubenswrapper[5010]: I0203 10:19:40.128460 5010 generic.go:334] "Generic (PLEG): container finished" podID="4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5" containerID="65eb5b187fb2b621b6369b286c1282184886349f4993b9fb3636ccf8920ff8d6" exitCode=0 Feb 03 10:19:40 crc kubenswrapper[5010]: I0203 10:19:40.128494 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerDied","Data":"65eb5b187fb2b621b6369b286c1282184886349f4993b9fb3636ccf8920ff8d6"} Feb 03 10:19:41 crc kubenswrapper[5010]: I0203 10:19:41.140769 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerStarted","Data":"3b9afa48db592eccb97b76872b31a36eb379d1c2ce8520af4f37e34f4b660c00"} Feb 03 10:19:41 crc kubenswrapper[5010]: I0203 10:19:41.141106 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:41 crc kubenswrapper[5010]: I0203 10:19:41.141122 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerStarted","Data":"51c00d5c8ba6e4f4fac73ffaba6f4fcbd46576ac40fc873aa85d9674443a706b"} Feb 03 10:19:41 crc kubenswrapper[5010]: I0203 10:19:41.141140 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerStarted","Data":"71344978daa2db95d6a18fce035d560708c4cd853cc315fb5a314ddb6a5d48b2"} Feb 03 10:19:41 crc kubenswrapper[5010]: I0203 10:19:41.141152 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerStarted","Data":"3e42db12729f8deeed09fb29d587b16b00967c8c046e2fe546ae400778f92295"} Feb 03 10:19:41 crc kubenswrapper[5010]: I0203 10:19:41.141163 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerStarted","Data":"aa4a1c721811ce88c6727b2b6f1831342546957b48a133194683d6e8edde97a2"} Feb 03 10:19:41 crc kubenswrapper[5010]: I0203 10:19:41.141175 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerStarted","Data":"ed207a36434fbc2b0fdbb09b247f112be66dda6b02a88d08579a4b0cdd47c950"} Feb 03 10:19:45 crc kubenswrapper[5010]: I0203 10:19:45.257587 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:45 crc 
kubenswrapper[5010]: I0203 10:19:45.295740 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:45 crc kubenswrapper[5010]: I0203 10:19:45.319232 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-2lwr2" podStartSLOduration=9.712184375 podStartE2EDuration="21.319199901s" podCreationTimestamp="2026-02-03 10:19:24 +0000 UTC" firstStartedPulling="2026-02-03 10:19:25.477950798 +0000 UTC m=+1035.633926927" lastFinishedPulling="2026-02-03 10:19:37.084966324 +0000 UTC m=+1047.240942453" observedRunningTime="2026-02-03 10:19:41.167392072 +0000 UTC m=+1051.323368201" watchObservedRunningTime="2026-02-03 10:19:45.319199901 +0000 UTC m=+1055.475176030" Feb 03 10:19:45 crc kubenswrapper[5010]: I0203 10:19:45.989980 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:46 crc kubenswrapper[5010]: I0203 10:19:46.876147 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-mlsql" Feb 03 10:19:49 crc kubenswrapper[5010]: I0203 10:19:49.962785 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-58tlq"] Feb 03 10:19:49 crc kubenswrapper[5010]: I0203 10:19:49.963608 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-58tlq" Feb 03 10:19:49 crc kubenswrapper[5010]: I0203 10:19:49.966115 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5qw2t" Feb 03 10:19:49 crc kubenswrapper[5010]: I0203 10:19:49.966188 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 03 10:19:49 crc kubenswrapper[5010]: I0203 10:19:49.969407 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 03 10:19:49 crc kubenswrapper[5010]: I0203 10:19:49.982512 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-58tlq"] Feb 03 10:19:50 crc kubenswrapper[5010]: I0203 10:19:50.154424 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jzmc\" (UniqueName: \"kubernetes.io/projected/27e02f08-a8b7-490f-a26c-2a5aa6af0ad1-kube-api-access-8jzmc\") pod \"openstack-operator-index-58tlq\" (UID: \"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1\") " pod="openstack-operators/openstack-operator-index-58tlq" Feb 03 10:19:50 crc kubenswrapper[5010]: I0203 10:19:50.265769 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jzmc\" (UniqueName: \"kubernetes.io/projected/27e02f08-a8b7-490f-a26c-2a5aa6af0ad1-kube-api-access-8jzmc\") pod \"openstack-operator-index-58tlq\" (UID: \"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1\") " pod="openstack-operators/openstack-operator-index-58tlq" Feb 03 10:19:50 crc kubenswrapper[5010]: I0203 10:19:50.284081 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jzmc\" (UniqueName: \"kubernetes.io/projected/27e02f08-a8b7-490f-a26c-2a5aa6af0ad1-kube-api-access-8jzmc\") pod \"openstack-operator-index-58tlq\" (UID: \"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1\") " pod="openstack-operators/openstack-operator-index-58tlq" Feb 03 10:19:50 crc kubenswrapper[5010]: I0203 10:19:50.581208 5010 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-58tlq" Feb 03 10:19:50 crc kubenswrapper[5010]: I0203 10:19:50.989648 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-58tlq"] Feb 03 10:19:51 crc kubenswrapper[5010]: I0203 10:19:51.199423 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-58tlq" event={"ID":"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1","Type":"ContainerStarted","Data":"5e677639a6c97370081222296cbb2e0a8d8af6746b719c225659bc34635fbb81"} Feb 03 10:19:53 crc kubenswrapper[5010]: I0203 10:19:53.340289 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-58tlq"] Feb 03 10:19:53 crc kubenswrapper[5010]: I0203 10:19:53.958377 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fv5km"] Feb 03 10:19:53 crc kubenswrapper[5010]: I0203 10:19:53.959568 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:19:53 crc kubenswrapper[5010]: I0203 10:19:53.965882 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fv5km"] Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.015352 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57r2\" (UniqueName: \"kubernetes.io/projected/1e93c0a0-5a7b-40d7-aaee-e31455baf139-kube-api-access-v57r2\") pod \"openstack-operator-index-fv5km\" (UID: \"1e93c0a0-5a7b-40d7-aaee-e31455baf139\") " pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.116978 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v57r2\" (UniqueName: \"kubernetes.io/projected/1e93c0a0-5a7b-40d7-aaee-e31455baf139-kube-api-access-v57r2\") pod \"openstack-operator-index-fv5km\" (UID: \"1e93c0a0-5a7b-40d7-aaee-e31455baf139\") " pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.142373 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v57r2\" (UniqueName: \"kubernetes.io/projected/1e93c0a0-5a7b-40d7-aaee-e31455baf139-kube-api-access-v57r2\") pod \"openstack-operator-index-fv5km\" (UID: \"1e93c0a0-5a7b-40d7-aaee-e31455baf139\") " pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.219322 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-58tlq" event={"ID":"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1","Type":"ContainerStarted","Data":"bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c"} Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.219432 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-58tlq" podUID="27e02f08-a8b7-490f-a26c-2a5aa6af0ad1" containerName="registry-server" containerID="cri-o://bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c" gracePeriod=2 Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.238615 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-58tlq" podStartSLOduration=2.19065542 
podStartE2EDuration="5.23859685s" podCreationTimestamp="2026-02-03 10:19:49 +0000 UTC" firstStartedPulling="2026-02-03 10:19:50.998471188 +0000 UTC m=+1061.154447317" lastFinishedPulling="2026-02-03 10:19:54.046412618 +0000 UTC m=+1064.202388747" observedRunningTime="2026-02-03 10:19:54.236556847 +0000 UTC m=+1064.392532976" watchObservedRunningTime="2026-02-03 10:19:54.23859685 +0000 UTC m=+1064.394572979" Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.282599 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.513174 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fv5km"] Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.669474 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-58tlq" Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.826241 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jzmc\" (UniqueName: \"kubernetes.io/projected/27e02f08-a8b7-490f-a26c-2a5aa6af0ad1-kube-api-access-8jzmc\") pod \"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1\" (UID: \"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1\") " Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.832565 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e02f08-a8b7-490f-a26c-2a5aa6af0ad1-kube-api-access-8jzmc" (OuterVolumeSpecName: "kube-api-access-8jzmc") pod "27e02f08-a8b7-490f-a26c-2a5aa6af0ad1" (UID: "27e02f08-a8b7-490f-a26c-2a5aa6af0ad1"). InnerVolumeSpecName "kube-api-access-8jzmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.927753 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jzmc\" (UniqueName: \"kubernetes.io/projected/27e02f08-a8b7-490f-a26c-2a5aa6af0ad1-kube-api-access-8jzmc\") on node \"crc\" DevicePath \"\"" Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.226936 5010 generic.go:334] "Generic (PLEG): container finished" podID="27e02f08-a8b7-490f-a26c-2a5aa6af0ad1" containerID="bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c" exitCode=0 Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.226988 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-58tlq" event={"ID":"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1","Type":"ContainerDied","Data":"bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c"} Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.227011 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-58tlq" Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.227028 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-58tlq" event={"ID":"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1","Type":"ContainerDied","Data":"5e677639a6c97370081222296cbb2e0a8d8af6746b719c225659bc34635fbb81"} Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.227045 5010 scope.go:117] "RemoveContainer" containerID="bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c" Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.229118 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fv5km" event={"ID":"1e93c0a0-5a7b-40d7-aaee-e31455baf139","Type":"ContainerStarted","Data":"062ce5e416a0048c6fe820619953bcbc43eac0ccba4550cb07947408bb005877"} Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.229151 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fv5km" event={"ID":"1e93c0a0-5a7b-40d7-aaee-e31455baf139","Type":"ContainerStarted","Data":"631725c7047fa1106af2d95e1b032c3ec5c9c17ad929d8a7b1babf104903e323"} Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.245371 5010 scope.go:117] "RemoveContainer" containerID="bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c" Feb 03 10:19:55 crc kubenswrapper[5010]: E0203 10:19:55.246141 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c\": container with ID starting with bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c not found: ID does not exist" containerID="bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c" Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.246234 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c"} err="failed to get container status \"bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c\": rpc error: code = NotFound desc = could not find container \"bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c\": container with ID starting with bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c not found: ID does not exist" Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.250778 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fv5km" podStartSLOduration=2.035012019 podStartE2EDuration="2.250754095s" podCreationTimestamp="2026-02-03 10:19:53 +0000 UTC" firstStartedPulling="2026-02-03 10:19:54.525823981 +0000 UTC m=+1064.681800110" lastFinishedPulling="2026-02-03 10:19:54.741566057 +0000 UTC m=+1064.897542186" observedRunningTime="2026-02-03 10:19:55.245148241 +0000 UTC m=+1065.401124390" watchObservedRunningTime="2026-02-03 10:19:55.250754095 +0000 UTC m=+1065.406730254" Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.263689 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.269162 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-58tlq"] Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.273064 5010 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-58tlq"] Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.274459 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:56 crc kubenswrapper[5010]: I0203 10:19:56.509152 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27e02f08-a8b7-490f-a26c-2a5aa6af0ad1" path="/var/lib/kubelet/pods/27e02f08-a8b7-490f-a26c-2a5aa6af0ad1/volumes" Feb 03 10:20:04 crc kubenswrapper[5010]: I0203 10:20:04.283468 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:20:04 crc kubenswrapper[5010]: I0203 10:20:04.283966 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:20:04 crc kubenswrapper[5010]: I0203 10:20:04.316651 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:20:04 crc kubenswrapper[5010]: I0203 10:20:04.346939 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.187823 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc"] Feb 03 10:20:06 crc kubenswrapper[5010]: E0203 10:20:06.188384 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e02f08-a8b7-490f-a26c-2a5aa6af0ad1" containerName="registry-server" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.188401 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e02f08-a8b7-490f-a26c-2a5aa6af0ad1" containerName="registry-server" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.188527 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e02f08-a8b7-490f-a26c-2a5aa6af0ad1" containerName="registry-server" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.189350 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.191188 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9977h" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.200157 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc"] Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.299457 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-bundle\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.299514 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-util\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.299571 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fcr8\" (UniqueName: \"kubernetes.io/projected/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-kube-api-access-2fcr8\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.400508 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-bundle\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.400563 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-util\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.400606 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fcr8\" (UniqueName: \"kubernetes.io/projected/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-kube-api-access-2fcr8\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.401150 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-util\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.402778 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-bundle\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.419011 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fcr8\" (UniqueName: \"kubernetes.io/projected/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-kube-api-access-2fcr8\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.532565 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.935560 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc"] Feb 03 10:20:07 crc kubenswrapper[5010]: I0203 10:20:07.312324 5010 generic.go:334] "Generic (PLEG): container finished" podID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerID="72abbe53ef303c966dac97295039fd50d30e9f313ab1eb51a686e38c86ad29bf" exitCode=0 Feb 03 10:20:07 crc kubenswrapper[5010]: I0203 10:20:07.312415 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" event={"ID":"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb","Type":"ContainerDied","Data":"72abbe53ef303c966dac97295039fd50d30e9f313ab1eb51a686e38c86ad29bf"} Feb 03 10:20:07 crc kubenswrapper[5010]: I0203 10:20:07.312674 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" event={"ID":"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb","Type":"ContainerStarted","Data":"7d64426a1c5618ac69d74890d5ab09299f87b0d7ca2ece50947215f9f2159ac5"} Feb 03 10:20:08 crc kubenswrapper[5010]: I0203 10:20:08.320399 5010 generic.go:334] "Generic (PLEG): container finished" podID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerID="15ba1fb969009e5814cbdceceaf66ba33621a230a92dd50a4bf7e769958bf10f" exitCode=0 Feb 03 10:20:08 crc kubenswrapper[5010]: I0203 10:20:08.320501 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" event={"ID":"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb","Type":"ContainerDied","Data":"15ba1fb969009e5814cbdceceaf66ba33621a230a92dd50a4bf7e769958bf10f"} Feb 03 10:20:09 crc kubenswrapper[5010]: I0203 10:20:09.330785 5010 generic.go:334] "Generic (PLEG): container finished" podID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerID="94438307668eb53c5f5445f671fd9a1bcebd80dfe6d4f4a5a3e39c52ce3f74fd" exitCode=0 Feb 03 10:20:09 crc kubenswrapper[5010]: I0203 10:20:09.330842 5010 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" event={"ID":"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb","Type":"ContainerDied","Data":"94438307668eb53c5f5445f671fd9a1bcebd80dfe6d4f4a5a3e39c52ce3f74fd"} Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.591062 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.658863 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-bundle\") pod \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.659194 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-util\") pod \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.659331 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fcr8\" (UniqueName: \"kubernetes.io/projected/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-kube-api-access-2fcr8\") pod \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.659707 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-bundle" (OuterVolumeSpecName: "bundle") pod "878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" (UID: "878224e8-6bbb-4b7f-9aff-b2bf21eef4bb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.664265 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-kube-api-access-2fcr8" (OuterVolumeSpecName: "kube-api-access-2fcr8") pod "878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" (UID: "878224e8-6bbb-4b7f-9aff-b2bf21eef4bb"). InnerVolumeSpecName "kube-api-access-2fcr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.675031 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-util" (OuterVolumeSpecName: "util") pod "878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" (UID: "878224e8-6bbb-4b7f-9aff-b2bf21eef4bb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.761242 5010 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.761287 5010 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-util\") on node \"crc\" DevicePath \"\"" Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.761297 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fcr8\" (UniqueName: \"kubernetes.io/projected/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-kube-api-access-2fcr8\") on node \"crc\" DevicePath \"\"" Feb 03 10:20:11 crc kubenswrapper[5010]: I0203 10:20:11.346957 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" event={"ID":"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb","Type":"ContainerDied","Data":"7d64426a1c5618ac69d74890d5ab09299f87b0d7ca2ece50947215f9f2159ac5"} Feb 03 10:20:11 crc kubenswrapper[5010]: I0203 10:20:11.346994 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:11 crc kubenswrapper[5010]: I0203 10:20:11.347007 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d64426a1c5618ac69d74890d5ab09299f87b0d7ca2ece50947215f9f2159ac5" Feb 03 10:20:16 crc kubenswrapper[5010]: I0203 10:20:16.389824 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:20:16 crc kubenswrapper[5010]: I0203 10:20:16.390395 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.184180 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"] Feb 03 10:20:18 crc kubenswrapper[5010]: E0203 10:20:18.184803 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerName="pull" Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.184817 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerName="pull" Feb 03 10:20:18 crc kubenswrapper[5010]: E0203 10:20:18.184840 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerName="util" Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.184847 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerName="util" Feb 03 10:20:18 crc kubenswrapper[5010]: E0203 10:20:18.184860 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerName="extract" Feb 03 10:20:18 
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.184867 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerName="extract"
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.185003 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerName="extract"
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.185485 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.189476 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-2kgrw"
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.227095 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"]
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.265592 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfd89\" (UniqueName: \"kubernetes.io/projected/bde44bc9-c06a-4c2b-aad8-6f3247272024-kube-api-access-pfd89\") pod \"openstack-operator-controller-init-578f994c6c-72ld2\" (UID: \"bde44bc9-c06a-4c2b-aad8-6f3247272024\") " pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.366589 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfd89\" (UniqueName: \"kubernetes.io/projected/bde44bc9-c06a-4c2b-aad8-6f3247272024-kube-api-access-pfd89\") pod \"openstack-operator-controller-init-578f994c6c-72ld2\" (UID: \"bde44bc9-c06a-4c2b-aad8-6f3247272024\") " pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.388447 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfd89\" (UniqueName: \"kubernetes.io/projected/bde44bc9-c06a-4c2b-aad8-6f3247272024-kube-api-access-pfd89\") pod \"openstack-operator-controller-init-578f994c6c-72ld2\" (UID: \"bde44bc9-c06a-4c2b-aad8-6f3247272024\") " pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.503540 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.982575 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"]
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.991175 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 10:20:19 crc kubenswrapper[5010]: I0203 10:20:19.413977 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2" event={"ID":"bde44bc9-c06a-4c2b-aad8-6f3247272024","Type":"ContainerStarted","Data":"0bebbf9909ef02daaa1533195d95da469593d888464f80ed7cf687d6aa5f592f"}
Feb 03 10:20:27 crc kubenswrapper[5010]: I0203 10:20:27.488571 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2" event={"ID":"bde44bc9-c06a-4c2b-aad8-6f3247272024","Type":"ContainerStarted","Data":"981b2e22c7badf0ca3652cd4319b877b8391ab2b738289eb3dbf54c4ef99062b"}
Feb 03 10:20:27 crc kubenswrapper[5010]: I0203 10:20:27.490131 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"
Feb 03 10:20:27 crc kubenswrapper[5010]: I0203 10:20:27.523323 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2" podStartSLOduration=1.360225965 podStartE2EDuration="9.523301904s" podCreationTimestamp="2026-02-03 10:20:18 +0000 UTC" firstStartedPulling="2026-02-03 10:20:18.990825234 +0000 UTC m=+1089.146801363" lastFinishedPulling="2026-02-03 10:20:27.153901173 +0000 UTC m=+1097.309877302" observedRunningTime="2026-02-03 10:20:27.521129338 +0000 UTC m=+1097.677105467" watchObservedRunningTime="2026-02-03 10:20:27.523301904 +0000 UTC m=+1097.679278033"
Feb 03 10:20:38 crc kubenswrapper[5010]: I0203 10:20:38.510521 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"
Feb 03 10:20:46 crc kubenswrapper[5010]: I0203 10:20:46.397873 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:20:46 crc kubenswrapper[5010]: I0203 10:20:46.398564 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.086702 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72"]
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.088378 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.091873 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lvq9v"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.092100 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56"]
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.093037 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.094915 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-x5txp"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.097967 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56"]
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.101554 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72"]
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.143034 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws"]
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.143834 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.150593 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-wlxnv"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.167304 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc"]
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.168650 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.177824 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-t2hc2"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.182305 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws"]
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.198452 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc"]
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.209474 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs"]
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.213681 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8ffcr" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.230664 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.231535 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.235421 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-67qfn" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.244606 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.251006 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b44v\" (UniqueName: \"kubernetes.io/projected/a7d72ea1-7126-4768-9cf8-f590ebd216d7-kube-api-access-2b44v\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-52g72\" (UID: \"a7d72ea1-7126-4768-9cf8-f590ebd216d7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.251067 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn9q2\" (UniqueName: \"kubernetes.io/projected/9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe-kube-api-access-nn9q2\") pod \"glance-operator-controller-manager-8886f4c47-gnxws\" (UID: \"9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.251090 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmjvd\" (UniqueName: \"kubernetes.io/projected/74803e29-48a3-4667-bcdb-a94f381545b5-kube-api-access-dmjvd\") pod \"cinder-operator-controller-manager-8d874c8fc-jvb56\" (UID: \"74803e29-48a3-4667-bcdb-a94f381545b5\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.251118 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6zg2\" (UniqueName: \"kubernetes.io/projected/fd413d86-2cda-4079-a895-5cb60928a47f-kube-api-access-l6zg2\") pod \"designate-operator-controller-manager-6d9697b7f4-j87lc\" (UID: \"fd413d86-2cda-4079-a895-5cb60928a47f\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.263289 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.264270 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.273588 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.273681 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qfj78" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.292711 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.293596 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.297244 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-556xw" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.305687 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.320050 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.321053 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.324242 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-kk5q5" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.336734 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.352922 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.352987 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dx96\" (UniqueName: \"kubernetes.io/projected/9dc494bd-d6ef-4a22-8312-67750ebb3dbe-kube-api-access-6dx96\") pod \"horizon-operator-controller-manager-5fb775575f-k765q\" (UID: \"9dc494bd-d6ef-4a22-8312-67750ebb3dbe\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.353023 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b44v\" (UniqueName: \"kubernetes.io/projected/a7d72ea1-7126-4768-9cf8-f590ebd216d7-kube-api-access-2b44v\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-52g72\" (UID: \"a7d72ea1-7126-4768-9cf8-f590ebd216d7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" Feb 03 10:20:57 crc kubenswrapper[5010]: 
I0203 10:20:57.353070 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzmjn\" (UniqueName: \"kubernetes.io/projected/5fafda3f-e0cd-4477-9c10-442af83a835b-kube-api-access-nzmjn\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.353109 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn9q2\" (UniqueName: \"kubernetes.io/projected/9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe-kube-api-access-nn9q2\") pod \"glance-operator-controller-manager-8886f4c47-gnxws\" (UID: \"9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.353149 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmjvd\" (UniqueName: \"kubernetes.io/projected/74803e29-48a3-4667-bcdb-a94f381545b5-kube-api-access-dmjvd\") pod \"cinder-operator-controller-manager-8d874c8fc-jvb56\" (UID: \"74803e29-48a3-4667-bcdb-a94f381545b5\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.353195 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zg2\" (UniqueName: \"kubernetes.io/projected/fd413d86-2cda-4079-a895-5cb60928a47f-kube-api-access-l6zg2\") pod \"designate-operator-controller-manager-6d9697b7f4-j87lc\" (UID: \"fd413d86-2cda-4079-a895-5cb60928a47f\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.353614 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khfqw\" (UniqueName: \"kubernetes.io/projected/d33dc0fd-847b-41cc-a8ac-afde40120ba2-kube-api-access-khfqw\") pod \"heat-operator-controller-manager-69d6db494d-7szqs\" (UID: \"d33dc0fd-847b-41cc-a8ac-afde40120ba2\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.361781 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.389130 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn9q2\" (UniqueName: \"kubernetes.io/projected/9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe-kube-api-access-nn9q2\") pod \"glance-operator-controller-manager-8886f4c47-gnxws\" (UID: \"9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.393382 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.394574 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zg2\" (UniqueName: \"kubernetes.io/projected/fd413d86-2cda-4079-a895-5cb60928a47f-kube-api-access-l6zg2\") pod \"designate-operator-controller-manager-6d9697b7f4-j87lc\" (UID: \"fd413d86-2cda-4079-a895-5cb60928a47f\") " 
pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.396000 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b44v\" (UniqueName: \"kubernetes.io/projected/a7d72ea1-7126-4768-9cf8-f590ebd216d7-kube-api-access-2b44v\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-52g72\" (UID: \"a7d72ea1-7126-4768-9cf8-f590ebd216d7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.400615 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmjvd\" (UniqueName: \"kubernetes.io/projected/74803e29-48a3-4667-bcdb-a94f381545b5-kube-api-access-dmjvd\") pod \"cinder-operator-controller-manager-8d874c8fc-jvb56\" (UID: \"74803e29-48a3-4667-bcdb-a94f381545b5\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.402504 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.403201 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.418076 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.418684 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-bw698" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.426655 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.426980 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.435350 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.436320 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.444231 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.445072 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-mwbcv" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.457560 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khfqw\" (UniqueName: \"kubernetes.io/projected/d33dc0fd-847b-41cc-a8ac-afde40120ba2-kube-api-access-khfqw\") pod \"heat-operator-controller-manager-69d6db494d-7szqs\" (UID: \"d33dc0fd-847b-41cc-a8ac-afde40120ba2\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.459266 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.459466 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dx96\" (UniqueName: \"kubernetes.io/projected/9dc494bd-d6ef-4a22-8312-67750ebb3dbe-kube-api-access-6dx96\") pod \"horizon-operator-controller-manager-5fb775575f-k765q\" (UID: \"9dc494bd-d6ef-4a22-8312-67750ebb3dbe\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.459644 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzmjn\" (UniqueName: \"kubernetes.io/projected/5fafda3f-e0cd-4477-9c10-442af83a835b-kube-api-access-nzmjn\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.459797 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vghr\" (UniqueName: \"kubernetes.io/projected/2f204595-5d98-4c16-b5d1-5004c6cae836-kube-api-access-4vghr\") pod \"ironic-operator-controller-manager-5f4b8bd54d-w7ldz\" (UID: \"2f204595-5d98-4c16-b5d1-5004c6cae836\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.459960 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k69sw\" (UniqueName: \"kubernetes.io/projected/1a136ea1-ab68-4f60-8fb2-969363f25337-kube-api-access-k69sw\") pod \"keystone-operator-controller-manager-84f48565d4-gb8tp\" (UID: \"1a136ea1-ab68-4f60-8fb2-969363f25337\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" Feb 03 10:20:57 crc kubenswrapper[5010]: E0203 10:20:57.461082 5010 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 10:20:57 crc kubenswrapper[5010]: E0203 10:20:57.461148 5010 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert podName:5fafda3f-e0cd-4477-9c10-442af83a835b nodeName:}" failed. No retries permitted until 2026-02-03 10:20:57.961130831 +0000 UTC m=+1128.117106960 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert") pod "infra-operator-controller-manager-79955696d6-vlmtm" (UID: "5fafda3f-e0cd-4477-9c10-442af83a835b") : secret "infra-operator-webhook-server-cert" not found Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.479673 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.503336 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.506740 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.510564 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khfqw\" (UniqueName: \"kubernetes.io/projected/d33dc0fd-847b-41cc-a8ac-afde40120ba2-kube-api-access-khfqw\") pod \"heat-operator-controller-manager-69d6db494d-7szqs\" (UID: \"d33dc0fd-847b-41cc-a8ac-afde40120ba2\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.518368 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzmjn\" (UniqueName: \"kubernetes.io/projected/5fafda3f-e0cd-4477-9c10-442af83a835b-kube-api-access-nzmjn\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.537948 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dx96\" (UniqueName: \"kubernetes.io/projected/9dc494bd-d6ef-4a22-8312-67750ebb3dbe-kube-api-access-6dx96\") pod \"horizon-operator-controller-manager-5fb775575f-k765q\" (UID: \"9dc494bd-d6ef-4a22-8312-67750ebb3dbe\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.593940 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.594589 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.595613 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.596786 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.598462 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-vhk6m" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.598614 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64lls\" (UniqueName: \"kubernetes.io/projected/7f20ca5f-d244-45be-864d-3b8ad3d456ea-kube-api-access-64lls\") pod \"manila-operator-controller-manager-7dd968899f-qrkwl\" (UID: \"7f20ca5f-d244-45be-864d-3b8ad3d456ea\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.598682 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vghr\" (UniqueName: \"kubernetes.io/projected/2f204595-5d98-4c16-b5d1-5004c6cae836-kube-api-access-4vghr\") pod \"ironic-operator-controller-manager-5f4b8bd54d-w7ldz\" (UID: \"2f204595-5d98-4c16-b5d1-5004c6cae836\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.598717 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47896\" (UniqueName: \"kubernetes.io/projected/42f76062-3a9d-45c1-b928-d9ca236ec8ab-kube-api-access-47896\") pod \"mariadb-operator-controller-manager-67bf948998-5zbbw\" (UID: \"42f76062-3a9d-45c1-b928-d9ca236ec8ab\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.598751 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k69sw\" (UniqueName: \"kubernetes.io/projected/1a136ea1-ab68-4f60-8fb2-969363f25337-kube-api-access-k69sw\") pod \"keystone-operator-controller-manager-84f48565d4-gb8tp\" (UID: \"1a136ea1-ab68-4f60-8fb2-969363f25337\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.601521 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.602731 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-lr6qh" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.608488 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.609763 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.614241 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-dl88t" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.619055 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.626979 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vghr\" (UniqueName: \"kubernetes.io/projected/2f204595-5d98-4c16-b5d1-5004c6cae836-kube-api-access-4vghr\") pod \"ironic-operator-controller-manager-5f4b8bd54d-w7ldz\" (UID: \"2f204595-5d98-4c16-b5d1-5004c6cae836\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.629903 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k69sw\" (UniqueName: \"kubernetes.io/projected/1a136ea1-ab68-4f60-8fb2-969363f25337-kube-api-access-k69sw\") pod \"keystone-operator-controller-manager-84f48565d4-gb8tp\" (UID: \"1a136ea1-ab68-4f60-8fb2-969363f25337\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.636725 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.645713 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.650077 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.660294 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.662451 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.665135 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-bqqr5" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.666282 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.673955 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.699562 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26tml\" (UniqueName: \"kubernetes.io/projected/21f46dec-fb01-4293-ad08-706eb63a8738-kube-api-access-26tml\") pod \"nova-operator-controller-manager-55bff696bd-t47jc\" (UID: \"21f46dec-fb01-4293-ad08-706eb63a8738\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.699617 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47896\" (UniqueName: \"kubernetes.io/projected/42f76062-3a9d-45c1-b928-d9ca236ec8ab-kube-api-access-47896\") pod \"mariadb-operator-controller-manager-67bf948998-5zbbw\" (UID: \"42f76062-3a9d-45c1-b928-d9ca236ec8ab\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.699686 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znfrh\" (UniqueName: \"kubernetes.io/projected/27ab6ab7-e411-466c-bc4a-97d1660c547e-kube-api-access-znfrh\") pod \"octavia-operator-controller-manager-6687f8d877-5lzr6\" (UID: \"27ab6ab7-e411-466c-bc4a-97d1660c547e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.699789 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mblb\" (UniqueName: \"kubernetes.io/projected/4f112d60-8db7-4ec2-a82d-c7627ade05a3-kube-api-access-5mblb\") pod \"neutron-operator-controller-manager-585dbc889-pwdks\" (UID: \"4f112d60-8db7-4ec2-a82d-c7627ade05a3\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.699862 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64lls\" (UniqueName: \"kubernetes.io/projected/7f20ca5f-d244-45be-864d-3b8ad3d456ea-kube-api-access-64lls\") pod \"manila-operator-controller-manager-7dd968899f-qrkwl\" (UID: \"7f20ca5f-d244-45be-864d-3b8ad3d456ea\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.704142 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.705709 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.708727 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-qfx9f" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.718354 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.723612 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.724567 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.734435 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.751950 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.753009 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.753728 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.753818 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.754402 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.772574 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.772649 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.773549 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.773639 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.799108 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-ftqqr"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.800346 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.814497 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-ftqqr"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.935086 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-lzl2q" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.936973 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-mhjhl" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.937191 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-g7t5t" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.937481 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-fbpzm" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.938122 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-q6hht" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.938561 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.938563 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64lls\" (UniqueName: \"kubernetes.io/projected/7f20ca5f-d244-45be-864d-3b8ad3d456ea-kube-api-access-64lls\") pod \"manila-operator-controller-manager-7dd968899f-qrkwl\" (UID: \"7f20ca5f-d244-45be-864d-3b8ad3d456ea\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.941989 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47896\" (UniqueName: \"kubernetes.io/projected/42f76062-3a9d-45c1-b928-d9ca236ec8ab-kube-api-access-47896\") pod \"mariadb-operator-controller-manager-67bf948998-5zbbw\" (UID: \"42f76062-3a9d-45c1-b928-d9ca236ec8ab\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.944121 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvdn5\" (UniqueName: \"kubernetes.io/projected/e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58-kube-api-access-rvdn5\") pod \"telemetry-operator-controller-manager-64b5b76f97-ck5g7\" (UID: \"e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.944746 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.944792 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-pvgrh\" (UniqueName: \"kubernetes.io/projected/3e47047f-9303-47e2-8312-c83315e1a3ff-kube-api-access-pvgrh\") pod \"ovn-operator-controller-manager-788c46999f-g8qz8\" (UID: \"3e47047f-9303-47e2-8312-c83315e1a3ff\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.944876 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mblb\" (UniqueName: \"kubernetes.io/projected/4f112d60-8db7-4ec2-a82d-c7627ade05a3-kube-api-access-5mblb\") pod \"neutron-operator-controller-manager-585dbc889-pwdks\" (UID: \"4f112d60-8db7-4ec2-a82d-c7627ade05a3\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.944931 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9djc\" (UniqueName: \"kubernetes.io/projected/84af1f21-c29e-4846-9ce1-ea345cbad4fc-kube-api-access-l9djc\") pod \"swift-operator-controller-manager-68fc8c869-mrvfq\" (UID: \"84af1f21-c29e-4846-9ce1-ea345cbad4fc\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.944979 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26tml\" (UniqueName: \"kubernetes.io/projected/21f46dec-fb01-4293-ad08-706eb63a8738-kube-api-access-26tml\") pod \"nova-operator-controller-manager-55bff696bd-t47jc\" (UID: \"21f46dec-fb01-4293-ad08-706eb63a8738\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.945029 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djcfh\" (UniqueName: \"kubernetes.io/projected/76bde002-75f6-4c4a-af3d-16aec5a221f4-kube-api-access-djcfh\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.945080 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znfrh\" (UniqueName: \"kubernetes.io/projected/27ab6ab7-e411-466c-bc4a-97d1660c547e-kube-api-access-znfrh\") pod \"octavia-operator-controller-manager-6687f8d877-5lzr6\" (UID: \"27ab6ab7-e411-466c-bc4a-97d1660c547e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.974538 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znfrh\" (UniqueName: \"kubernetes.io/projected/27ab6ab7-e411-466c-bc4a-97d1660c547e-kube-api-access-znfrh\") pod \"octavia-operator-controller-manager-6687f8d877-5lzr6\" (UID: \"27ab6ab7-e411-466c-bc4a-97d1660c547e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.984432 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26tml\" (UniqueName: \"kubernetes.io/projected/21f46dec-fb01-4293-ad08-706eb63a8738-kube-api-access-26tml\") pod \"nova-operator-controller-manager-55bff696bd-t47jc\" (UID: \"21f46dec-fb01-4293-ad08-706eb63a8738\") " 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.984499 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mblb\" (UniqueName: \"kubernetes.io/projected/4f112d60-8db7-4ec2-a82d-c7627ade05a3-kube-api-access-5mblb\") pod \"neutron-operator-controller-manager-585dbc889-pwdks\" (UID: \"4f112d60-8db7-4ec2-a82d-c7627ade05a3\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.072721 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc"] Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.073476 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.077174 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-frpdt" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.077596 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.077700 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.089567 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc"] Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104488 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvdn5\" (UniqueName: \"kubernetes.io/projected/e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58-kube-api-access-rvdn5\") pod \"telemetry-operator-controller-manager-64b5b76f97-ck5g7\" (UID: \"e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104554 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104586 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gl62\" (UniqueName: \"kubernetes.io/projected/a62d6669-692b-4909-b192-4348ac82a50d-kube-api-access-5gl62\") pod \"test-operator-controller-manager-56f8bfcd9f-pgwx2\" (UID: \"a62d6669-692b-4909-b192-4348ac82a50d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104626 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvgrh\" (UniqueName: \"kubernetes.io/projected/3e47047f-9303-47e2-8312-c83315e1a3ff-kube-api-access-pvgrh\") pod \"ovn-operator-controller-manager-788c46999f-g8qz8\" (UID: \"3e47047f-9303-47e2-8312-c83315e1a3ff\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" Feb 03 10:20:58 
crc kubenswrapper[5010]: I0203 10:20:58.104684 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104725 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bldlv\" (UniqueName: \"kubernetes.io/projected/37a4f3fa-bbaf-433d-9835-6ac576351651-kube-api-access-bldlv\") pod \"watcher-operator-controller-manager-564965969-ftqqr\" (UID: \"37a4f3fa-bbaf-433d-9835-6ac576351651\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104758 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9djc\" (UniqueName: \"kubernetes.io/projected/84af1f21-c29e-4846-9ce1-ea345cbad4fc-kube-api-access-l9djc\") pod \"swift-operator-controller-manager-68fc8c869-mrvfq\" (UID: \"84af1f21-c29e-4846-9ce1-ea345cbad4fc\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104819 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6j7f\" (UniqueName: \"kubernetes.io/projected/8251c193-3c53-4651-87da-8b216cf907aa-kube-api-access-r6j7f\") pod \"placement-operator-controller-manager-5b964cf4cd-d99mj\" (UID: \"8251c193-3c53-4651-87da-8b216cf907aa\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104849 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djcfh\" (UniqueName: \"kubernetes.io/projected/76bde002-75f6-4c4a-af3d-16aec5a221f4-kube-api-access-djcfh\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.105817 5010 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.105872 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert podName:76bde002-75f6-4c4a-af3d-16aec5a221f4 nodeName:}" failed. No retries permitted until 2026-02-03 10:20:58.605855126 +0000 UTC m=+1128.761831255 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" (UID: "76bde002-75f6-4c4a-af3d-16aec5a221f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.105878 5010 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.105952 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert podName:5fafda3f-e0cd-4477-9c10-442af83a835b nodeName:}" failed. No retries permitted until 2026-02-03 10:20:59.105928908 +0000 UTC m=+1129.261905067 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert") pod "infra-operator-controller-manager-79955696d6-vlmtm" (UID: "5fafda3f-e0cd-4477-9c10-442af83a835b") : secret "infra-operator-webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.416181 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.417515 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.419039 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.420364 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gl62\" (UniqueName: \"kubernetes.io/projected/a62d6669-692b-4909-b192-4348ac82a50d-kube-api-access-5gl62\") pod \"test-operator-controller-manager-56f8bfcd9f-pgwx2\" (UID: \"a62d6669-692b-4909-b192-4348ac82a50d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.420451 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.420506 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.420652 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bldlv\" (UniqueName: \"kubernetes.io/projected/37a4f3fa-bbaf-433d-9835-6ac576351651-kube-api-access-bldlv\") pod \"watcher-operator-controller-manager-564965969-ftqqr\" 
(UID: \"37a4f3fa-bbaf-433d-9835-6ac576351651\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.420787 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dv2g\" (UniqueName: \"kubernetes.io/projected/54aaeb1d-8a23-413f-b1f4-5115b167d78b-kube-api-access-7dv2g\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.420851 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6j7f\" (UniqueName: \"kubernetes.io/projected/8251c193-3c53-4651-87da-8b216cf907aa-kube-api-access-r6j7f\") pod \"placement-operator-controller-manager-5b964cf4cd-d99mj\" (UID: \"8251c193-3c53-4651-87da-8b216cf907aa\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.421880 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.422944 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.445639 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvgrh\" (UniqueName: \"kubernetes.io/projected/3e47047f-9303-47e2-8312-c83315e1a3ff-kube-api-access-pvgrh\") pod \"ovn-operator-controller-manager-788c46999f-g8qz8\" (UID: \"3e47047f-9303-47e2-8312-c83315e1a3ff\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.453307 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.458708 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djcfh\" (UniqueName: \"kubernetes.io/projected/76bde002-75f6-4c4a-af3d-16aec5a221f4-kube-api-access-djcfh\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.489229 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvdn5\" (UniqueName: \"kubernetes.io/projected/e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58-kube-api-access-rvdn5\") pod \"telemetry-operator-controller-manager-64b5b76f97-ck5g7\" (UID: \"e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.677438 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9djc\" (UniqueName: \"kubernetes.io/projected/84af1f21-c29e-4846-9ce1-ea345cbad4fc-kube-api-access-l9djc\") pod \"swift-operator-controller-manager-68fc8c869-mrvfq\" (UID: \"84af1f21-c29e-4846-9ce1-ea345cbad4fc\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.689587 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dv2g\" (UniqueName: \"kubernetes.io/projected/54aaeb1d-8a23-413f-b1f4-5115b167d78b-kube-api-access-7dv2g\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.690910 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.690964 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.691023 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.697815 5010 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.697915 5010 secret.go:188] 
Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.697961 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert podName:76bde002-75f6-4c4a-af3d-16aec5a221f4 nodeName:}" failed. No retries permitted until 2026-02-03 10:20:59.697945771 +0000 UTC m=+1129.853921900 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" (UID: "76bde002-75f6-4c4a-af3d-16aec5a221f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.698090 5010 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.698140 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:20:59.198114215 +0000 UTC m=+1129.354090344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "metrics-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.703163 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:20:59.203140424 +0000 UTC m=+1129.359116553 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.726456 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6j7f\" (UniqueName: \"kubernetes.io/projected/8251c193-3c53-4651-87da-8b216cf907aa-kube-api-access-r6j7f\") pod \"placement-operator-controller-manager-5b964cf4cd-d99mj\" (UID: \"8251c193-3c53-4651-87da-8b216cf907aa\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.733176 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bldlv\" (UniqueName: \"kubernetes.io/projected/37a4f3fa-bbaf-433d-9835-6ac576351651-kube-api-access-bldlv\") pod \"watcher-operator-controller-manager-564965969-ftqqr\" (UID: \"37a4f3fa-bbaf-433d-9835-6ac576351651\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.741733 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gl62\" (UniqueName: \"kubernetes.io/projected/a62d6669-692b-4909-b192-4348ac82a50d-kube-api-access-5gl62\") pod \"test-operator-controller-manager-56f8bfcd9f-pgwx2\" (UID: \"a62d6669-692b-4909-b192-4348ac82a50d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.744488 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dv2g\" (UniqueName: \"kubernetes.io/projected/54aaeb1d-8a23-413f-b1f4-5115b167d78b-kube-api-access-7dv2g\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.744869 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.903008 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.108930 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.113589 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj"] Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.114358 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj"] Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.114449 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.116472 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jlf56" Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.120321 5010 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.120395 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert podName:5fafda3f-e0cd-4477-9c10-442af83a835b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:01.120377122 +0000 UTC m=+1131.276353251 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert") pod "infra-operator-controller-manager-79955696d6-vlmtm" (UID: "5fafda3f-e0cd-4477-9c10-442af83a835b") : secret "infra-operator-webhook-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.143571 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.160113 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.173050 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.210169 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2zwx\" (UniqueName: \"kubernetes.io/projected/2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a-kube-api-access-c2zwx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kj7mj\" (UID: \"2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.210304 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.210337 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.210439 5010 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.210503 5010 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:00.210483114 +0000 UTC m=+1130.366459243 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "metrics-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.211895 5010 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.211930 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:00.211920151 +0000 UTC m=+1130.367896280 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "webhook-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.346746 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2zwx\" (UniqueName: \"kubernetes.io/projected/2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a-kube-api-access-c2zwx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kj7mj\" (UID: \"2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.498588 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2zwx\" (UniqueName: \"kubernetes.io/projected/2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a-kube-api-access-c2zwx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kj7mj\" (UID: \"2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.577206 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.700019 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.701431 5010 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.701473 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert podName:76bde002-75f6-4c4a-af3d-16aec5a221f4 nodeName:}" failed. 
No retries permitted until 2026-02-03 10:21:01.701459194 +0000 UTC m=+1131.857435313 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" (UID: "76bde002-75f6-4c4a-af3d-16aec5a221f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.724788 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp"] Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.736040 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72"] Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.767507 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56"] Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.774313 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q"] Feb 03 10:20:59 crc kubenswrapper[5010]: W0203 10:20:59.794328 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a136ea1_ab68_4f60_8fb2_969363f25337.slice/crio-247073d823e29079b70a880eb5a01130a2597ed24f667e8b834f53d6af4afd90 WatchSource:0}: Error finding container 247073d823e29079b70a880eb5a01130a2597ed24f667e8b834f53d6af4afd90: Status 404 returned error can't find the container with id 247073d823e29079b70a880eb5a01130a2597ed24f667e8b834f53d6af4afd90 Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.965652 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" event={"ID":"a7d72ea1-7126-4768-9cf8-f590ebd216d7","Type":"ContainerStarted","Data":"777584da4ae303e1bee67558c39b19de945ee8851e1de7f3cddcbb09a5faf862"} Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.967262 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" event={"ID":"1a136ea1-ab68-4f60-8fb2-969363f25337","Type":"ContainerStarted","Data":"247073d823e29079b70a880eb5a01130a2597ed24f667e8b834f53d6af4afd90"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.085209 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz"] Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.218049 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.218102 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 
10:21:00 crc kubenswrapper[5010]: E0203 10:21:00.218248 5010 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 10:21:00 crc kubenswrapper[5010]: E0203 10:21:00.218323 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:02.218285638 +0000 UTC m=+1132.374261777 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "metrics-server-cert" not found Feb 03 10:21:00 crc kubenswrapper[5010]: E0203 10:21:00.218378 5010 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 10:21:00 crc kubenswrapper[5010]: E0203 10:21:00.218406 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:02.218397391 +0000 UTC m=+1132.374373520 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "webhook-server-cert" not found Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.327438 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws"] Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.345683 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc"] Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.361539 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs"] Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.383843 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8"] Feb 03 10:21:00 crc kubenswrapper[5010]: W0203 10:21:00.485501 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fa8a872_8dc5_4e6d_838a_5dc54e6d4bbe.slice/crio-314cd8ffccb4bf543aaf592699c50c8d3be532bfb7978dd3fd40059992a22bba WatchSource:0}: Error finding container 314cd8ffccb4bf543aaf592699c50c8d3be532bfb7978dd3fd40059992a22bba: Status 404 returned error can't find the container with id 314cd8ffccb4bf543aaf592699c50c8d3be532bfb7978dd3fd40059992a22bba Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.779390 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw"] Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.809338 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj"] Feb 03 10:21:00 crc kubenswrapper[5010]: W0203 10:21:00.816299 5010 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8251c193_3c53_4651_87da_8b216cf907aa.slice/crio-67cb9f81a95e2b3746c143178f801cc9201360836d4b672048f54115f4fa4b2b WatchSource:0}: Error finding container 67cb9f81a95e2b3746c143178f801cc9201360836d4b672048f54115f4fa4b2b: Status 404 returned error can't find the container with id 67cb9f81a95e2b3746c143178f801cc9201360836d4b672048f54115f4fa4b2b Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.822456 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl"] Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.832738 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6"] Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.914153 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj"] Feb 03 10:21:00 crc kubenswrapper[5010]: W0203 10:21:00.916239 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cbbe9fa_4c61_41fc_9a62_41dbaea09a0a.slice/crio-80d30c684b7fac1a947146f508ed062bd5dd4c014aa41f4b7cb243691925af4a WatchSource:0}: Error finding container 80d30c684b7fac1a947146f508ed062bd5dd4c014aa41f4b7cb243691925af4a: Status 404 returned error can't find the container with id 80d30c684b7fac1a947146f508ed062bd5dd4c014aa41f4b7cb243691925af4a Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.977969 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" event={"ID":"9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe","Type":"ContainerStarted","Data":"314cd8ffccb4bf543aaf592699c50c8d3be532bfb7978dd3fd40059992a22bba"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.979323 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" event={"ID":"3e47047f-9303-47e2-8312-c83315e1a3ff","Type":"ContainerStarted","Data":"c63a0db7216ee41563ab86de9ee998a54ea1ea70afdfd4a16c1ba6f2203f310b"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.980544 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" event={"ID":"2f204595-5d98-4c16-b5d1-5004c6cae836","Type":"ContainerStarted","Data":"cd9efe4f3ce1880d64f7b8b57dc176717a78e49c4e3649fa93097dacdb67f0db"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.981706 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" event={"ID":"42f76062-3a9d-45c1-b928-d9ca236ec8ab","Type":"ContainerStarted","Data":"ce9cbe44e818ebc74946896e08243f13a574c52ebf60de90e4365e4039c1c903"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.982968 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" event={"ID":"27ab6ab7-e411-466c-bc4a-97d1660c547e","Type":"ContainerStarted","Data":"a23413ee3b29f499b89e8dc8330a4c6e2c4f840dd46371abd5e70fbcf792193f"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.984496 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" 
event={"ID":"8251c193-3c53-4651-87da-8b216cf907aa","Type":"ContainerStarted","Data":"67cb9f81a95e2b3746c143178f801cc9201360836d4b672048f54115f4fa4b2b"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.985876 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" event={"ID":"9dc494bd-d6ef-4a22-8312-67750ebb3dbe","Type":"ContainerStarted","Data":"6c8f1e6b9f75d5b192f66093034e4d6f58a99c74a14523fa14500727eb106374"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.987040 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" event={"ID":"2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a","Type":"ContainerStarted","Data":"80d30c684b7fac1a947146f508ed062bd5dd4c014aa41f4b7cb243691925af4a"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.988280 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" event={"ID":"74803e29-48a3-4667-bcdb-a94f381545b5","Type":"ContainerStarted","Data":"6e01976389e1fb3fa323370b3dc0da56c38b304756117a6b78876dd18b07a733"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.990243 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" event={"ID":"7f20ca5f-d244-45be-864d-3b8ad3d456ea","Type":"ContainerStarted","Data":"77a142fe2b9c3b3d5d5de4607bb1f9d5bfd2395c269d99dfa990a0721140f3b6"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.991413 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" event={"ID":"fd413d86-2cda-4079-a895-5cb60928a47f","Type":"ContainerStarted","Data":"05e59ea5914ae024ac2ab3d90428654f3fee850a8ccaa9548fcd24dd465e95ed"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.992600 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" event={"ID":"d33dc0fd-847b-41cc-a8ac-afde40120ba2","Type":"ContainerStarted","Data":"5e28092feea0417fec92df560efc7fdb66d64913de365dd260ca97018c70d5f3"} Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.121689 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc"] Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.136929 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq"] Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.145144 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.145403 5010 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.145508 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert podName:5fafda3f-e0cd-4477-9c10-442af83a835b nodeName:}" failed. 
No retries permitted until 2026-02-03 10:21:05.145484823 +0000 UTC m=+1135.301460992 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert") pod "infra-operator-controller-manager-79955696d6-vlmtm" (UID: "5fafda3f-e0cd-4477-9c10-442af83a835b") : secret "infra-operator-webhook-server-cert" not found Feb 03 10:21:01 crc kubenswrapper[5010]: W0203 10:21:01.146811 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode51fff09_23b1_4bf0_b4e2_eeb2e6ee3c58.slice/crio-4bfbf9d4f63c9391c4d4f857c540da0940559e2d8d4e353bcb1e788f1790431a WatchSource:0}: Error finding container 4bfbf9d4f63c9391c4d4f857c540da0940559e2d8d4e353bcb1e788f1790431a: Status 404 returned error can't find the container with id 4bfbf9d4f63c9391c4d4f857c540da0940559e2d8d4e353bcb1e788f1790431a Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.146712 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-ftqqr"] Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.155652 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2"] Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.162972 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7"] Feb 03 10:21:01 crc kubenswrapper[5010]: W0203 10:21:01.167131 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda62d6669_692b_4909_b192_4348ac82a50d.slice/crio-f64984c38128739c7391db832b7bed14b6b51b869203734056195d6793167d0d WatchSource:0}: Error finding container f64984c38128739c7391db832b7bed14b6b51b869203734056195d6793167d0d: Status 404 returned error can't find the container with id f64984c38128739c7391db832b7bed14b6b51b869203734056195d6793167d0d Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.168736 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks"] Feb 03 10:21:01 crc kubenswrapper[5010]: W0203 10:21:01.172440 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21f46dec_fb01_4293_ad08_706eb63a8738.slice/crio-3f0754bd9e5babedd813570f881635452f0be75353955fbc0465a5388b23dadf WatchSource:0}: Error finding container 3f0754bd9e5babedd813570f881635452f0be75353955fbc0465a5388b23dadf: Status 404 returned error can't find the container with id 3f0754bd9e5babedd813570f881635452f0be75353955fbc0465a5388b23dadf Feb 03 10:21:01 crc kubenswrapper[5010]: W0203 10:21:01.172750 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f112d60_8db7_4ec2_a82d_c7627ade05a3.slice/crio-4014d8defd7cc40c72c8e5af76f2fbb11a6ee18d3ff5ad690d739d6472bf6f2e WatchSource:0}: Error finding container 4014d8defd7cc40c72c8e5af76f2fbb11a6ee18d3ff5ad690d739d6472bf6f2e: Status 404 returned error can't find the container with id 4014d8defd7cc40c72c8e5af76f2fbb11a6ee18d3ff5ad690d739d6472bf6f2e Feb 03 10:21:01 crc kubenswrapper[5010]: W0203 10:21:01.176661 5010 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a4f3fa_bbaf_433d_9835_6ac576351651.slice/crio-67516141402a2760dc386c4329f04fc6c235f2d8d95e6610c9fd6c1c1d3ab909 WatchSource:0}: Error finding container 67516141402a2760dc386c4329f04fc6c235f2d8d95e6610c9fd6c1c1d3ab909: Status 404 returned error can't find the container with id 67516141402a2760dc386c4329f04fc6c235f2d8d95e6610c9fd6c1c1d3ab909 Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.178785 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-26tml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-t47jc_openstack-operators(21f46dec-fb01-4293-ad08-706eb63a8738): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.178929 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5mblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-pwdks_openstack-operators(4f112d60-8db7-4ec2-a82d-c7627ade05a3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.180038 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" podUID="21f46dec-fb01-4293-ad08-706eb63a8738" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.180084 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" podUID="4f112d60-8db7-4ec2-a82d-c7627ade05a3" Feb 03 10:21:01 crc kubenswrapper[5010]: W0203 10:21:01.180906 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84af1f21_c29e_4846_9ce1_ea345cbad4fc.slice/crio-06c82c051d3d741f31db1af40db286dd7c40aab5dfa765b63713941b6bf104ac WatchSource:0}: Error finding container 06c82c051d3d741f31db1af40db286dd7c40aab5dfa765b63713941b6bf104ac: Status 404 returned error can't find the container with id 06c82c051d3d741f31db1af40db286dd7c40aab5dfa765b63713941b6bf104ac Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.181352 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bldlv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-ftqqr_openstack-operators(37a4f3fa-bbaf-433d-9835-6ac576351651): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.182797 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" podUID="37a4f3fa-bbaf-433d-9835-6ac576351651" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.185804 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: 
{{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l9djc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-mrvfq_openstack-operators(84af1f21-c29e-4846-9ce1-ea345cbad4fc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.186973 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" podUID="84af1f21-c29e-4846-9ce1-ea345cbad4fc" Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.771888 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.772047 5010 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.772088 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert podName:76bde002-75f6-4c4a-af3d-16aec5a221f4 nodeName:}" failed. No retries permitted until 2026-02-03 10:21:05.772075492 +0000 UTC m=+1135.928051621 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" (UID: "76bde002-75f6-4c4a-af3d-16aec5a221f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.027331 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" event={"ID":"4f112d60-8db7-4ec2-a82d-c7627ade05a3","Type":"ContainerStarted","Data":"4014d8defd7cc40c72c8e5af76f2fbb11a6ee18d3ff5ad690d739d6472bf6f2e"} Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.029640 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" podUID="4f112d60-8db7-4ec2-a82d-c7627ade05a3" Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.031784 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" event={"ID":"84af1f21-c29e-4846-9ce1-ea345cbad4fc","Type":"ContainerStarted","Data":"06c82c051d3d741f31db1af40db286dd7c40aab5dfa765b63713941b6bf104ac"} Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.035074 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" event={"ID":"21f46dec-fb01-4293-ad08-706eb63a8738","Type":"ContainerStarted","Data":"3f0754bd9e5babedd813570f881635452f0be75353955fbc0465a5388b23dadf"} Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.035335 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" podUID="84af1f21-c29e-4846-9ce1-ea345cbad4fc" Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.038079 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" podUID="21f46dec-fb01-4293-ad08-706eb63a8738" Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.048556 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" event={"ID":"37a4f3fa-bbaf-433d-9835-6ac576351651","Type":"ContainerStarted","Data":"67516141402a2760dc386c4329f04fc6c235f2d8d95e6610c9fd6c1c1d3ab909"} Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.052730 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" 
podUID="37a4f3fa-bbaf-433d-9835-6ac576351651" Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.053515 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" event={"ID":"e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58","Type":"ContainerStarted","Data":"4bfbf9d4f63c9391c4d4f857c540da0940559e2d8d4e353bcb1e788f1790431a"} Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.066202 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" event={"ID":"a62d6669-692b-4909-b192-4348ac82a50d","Type":"ContainerStarted","Data":"f64984c38128739c7391db832b7bed14b6b51b869203734056195d6793167d0d"} Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.280241 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.280303 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.280443 5010 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.280513 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:06.28049522 +0000 UTC m=+1136.436471349 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "metrics-server-cert" not found Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.280535 5010 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.280645 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:06.280618683 +0000 UTC m=+1136.436594842 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "webhook-server-cert" not found Feb 03 10:21:03 crc kubenswrapper[5010]: E0203 10:21:03.215123 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" podUID="37a4f3fa-bbaf-433d-9835-6ac576351651" Feb 03 10:21:03 crc kubenswrapper[5010]: E0203 10:21:03.216463 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" podUID="4f112d60-8db7-4ec2-a82d-c7627ade05a3" Feb 03 10:21:03 crc kubenswrapper[5010]: E0203 10:21:03.218031 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" podUID="84af1f21-c29e-4846-9ce1-ea345cbad4fc" Feb 03 10:21:03 crc kubenswrapper[5010]: E0203 10:21:03.222591 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" podUID="21f46dec-fb01-4293-ad08-706eb63a8738" Feb 03 10:21:05 crc kubenswrapper[5010]: I0203 10:21:05.227424 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:21:05 crc kubenswrapper[5010]: E0203 10:21:05.228180 5010 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 10:21:05 crc kubenswrapper[5010]: E0203 10:21:05.228328 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert podName:5fafda3f-e0cd-4477-9c10-442af83a835b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:13.22830873 +0000 UTC m=+1143.384284859 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert") pod "infra-operator-controller-manager-79955696d6-vlmtm" (UID: "5fafda3f-e0cd-4477-9c10-442af83a835b") : secret "infra-operator-webhook-server-cert" not found Feb 03 10:21:05 crc kubenswrapper[5010]: I0203 10:21:05.860574 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:21:05 crc kubenswrapper[5010]: E0203 10:21:05.860752 5010 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:21:05 crc kubenswrapper[5010]: E0203 10:21:05.863864 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert podName:76bde002-75f6-4c4a-af3d-16aec5a221f4 nodeName:}" failed. No retries permitted until 2026-02-03 10:21:13.86384273 +0000 UTC m=+1144.019818859 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" (UID: "76bde002-75f6-4c4a-af3d-16aec5a221f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:21:06 crc kubenswrapper[5010]: I0203 10:21:06.380160 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:06 crc kubenswrapper[5010]: I0203 10:21:06.380232 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:06 crc kubenswrapper[5010]: E0203 10:21:06.380350 5010 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 10:21:06 crc kubenswrapper[5010]: E0203 10:21:06.380424 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:14.380406947 +0000 UTC m=+1144.536383076 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "webhook-server-cert" not found Feb 03 10:21:06 crc kubenswrapper[5010]: E0203 10:21:06.380442 5010 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 10:21:06 crc kubenswrapper[5010]: E0203 10:21:06.380568 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:14.380545361 +0000 UTC m=+1144.536521580 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "metrics-server-cert" not found Feb 03 10:21:13 crc kubenswrapper[5010]: I0203 10:21:13.248893 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:21:13 crc kubenswrapper[5010]: E0203 10:21:13.249112 5010 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 10:21:13 crc kubenswrapper[5010]: E0203 10:21:13.249560 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert podName:5fafda3f-e0cd-4477-9c10-442af83a835b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:29.249544531 +0000 UTC m=+1159.405520660 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert") pod "infra-operator-controller-manager-79955696d6-vlmtm" (UID: "5fafda3f-e0cd-4477-9c10-442af83a835b") : secret "infra-operator-webhook-server-cert" not found Feb 03 10:21:13 crc kubenswrapper[5010]: I0203 10:21:13.961435 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:21:13 crc kubenswrapper[5010]: E0203 10:21:13.961806 5010 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:21:13 crc kubenswrapper[5010]: E0203 10:21:13.962004 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert podName:76bde002-75f6-4c4a-af3d-16aec5a221f4 nodeName:}" failed. No retries permitted until 2026-02-03 10:21:29.961952394 +0000 UTC m=+1160.117928563 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" (UID: "76bde002-75f6-4c4a-af3d-16aec5a221f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.094935 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.095148 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4vghr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-w7ldz_openstack-operators(2f204595-5d98-4c16-b5d1-5004c6cae836): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.097813 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" podUID="2f204595-5d98-4c16-b5d1-5004c6cae836" Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.381155 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" podUID="2f204595-5d98-4c16-b5d1-5004c6cae836" Feb 03 10:21:14 crc kubenswrapper[5010]: I0203 10:21:14.468193 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:14 crc kubenswrapper[5010]: I0203 10:21:14.468396 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.468323 5010 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.468478 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:30.468460592 +0000 UTC m=+1160.624436721 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "webhook-server-cert" not found Feb 03 10:21:14 crc kubenswrapper[5010]: I0203 10:21:14.476189 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.823472 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.823688 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r6j7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-d99mj_openstack-operators(8251c193-3c53-4651-87da-8b216cf907aa): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.825033 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" podUID="8251c193-3c53-4651-87da-8b216cf907aa" Feb 03 10:21:15 crc kubenswrapper[5010]: E0203 10:21:15.397246 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" podUID="8251c193-3c53-4651-87da-8b216cf907aa" Feb 03 10:21:15 crc kubenswrapper[5010]: E0203 10:21:15.778635 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Feb 03 10:21:15 crc kubenswrapper[5010]: E0203 10:21:15.778837 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-znfrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-5lzr6_openstack-operators(27ab6ab7-e411-466c-bc4a-97d1660c547e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:15 crc kubenswrapper[5010]: E0203 10:21:15.780237 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" podUID="27ab6ab7-e411-466c-bc4a-97d1660c547e" Feb 03 10:21:16 crc kubenswrapper[5010]: I0203 10:21:16.390198 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:21:16 crc kubenswrapper[5010]: I0203 10:21:16.390307 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:21:16 crc kubenswrapper[5010]: I0203 10:21:16.390388 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:21:16 crc kubenswrapper[5010]: I0203 10:21:16.391386 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"221f195b125299df734f26b3fd40fd966d81cfff3c339b70c815feda6a5e1f4b"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:21:16 crc kubenswrapper[5010]: I0203 10:21:16.391455 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://221f195b125299df734f26b3fd40fd966d81cfff3c339b70c815feda6a5e1f4b" gracePeriod=600 Feb 03 10:21:16 crc kubenswrapper[5010]: E0203 10:21:16.404043 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" 
pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" podUID="27ab6ab7-e411-466c-bc4a-97d1660c547e" Feb 03 10:21:16 crc kubenswrapper[5010]: E0203 10:21:16.679849 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Feb 03 10:21:16 crc kubenswrapper[5010]: E0203 10:21:16.680026 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-47896,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-5zbbw_openstack-operators(42f76062-3a9d-45c1-b928-d9ca236ec8ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:16 crc kubenswrapper[5010]: E0203 10:21:16.681383 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" podUID="42f76062-3a9d-45c1-b928-d9ca236ec8ab" Feb 03 10:21:17 crc kubenswrapper[5010]: I0203 10:21:17.410587 5010 generic.go:334] "Generic (PLEG): container finished" 
podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="221f195b125299df734f26b3fd40fd966d81cfff3c339b70c815feda6a5e1f4b" exitCode=0 Feb 03 10:21:17 crc kubenswrapper[5010]: I0203 10:21:17.410678 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"221f195b125299df734f26b3fd40fd966d81cfff3c339b70c815feda6a5e1f4b"} Feb 03 10:21:17 crc kubenswrapper[5010]: I0203 10:21:17.410757 5010 scope.go:117] "RemoveContainer" containerID="9442102e724f69e1d556f61f5773f0e8e33b6a283cb3f40b3f679d223bc6c1e0" Feb 03 10:21:17 crc kubenswrapper[5010]: E0203 10:21:17.413322 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" podUID="42f76062-3a9d-45c1-b928-d9ca236ec8ab" Feb 03 10:21:19 crc kubenswrapper[5010]: E0203 10:21:19.748677 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Feb 03 10:21:19 crc kubenswrapper[5010]: E0203 10:21:19.749453 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l6zg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-j87lc_openstack-operators(fd413d86-2cda-4079-a895-5cb60928a47f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:19 crc kubenswrapper[5010]: E0203 10:21:19.750720 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" podUID="fd413d86-2cda-4079-a895-5cb60928a47f" Feb 03 10:21:20 crc kubenswrapper[5010]: E0203 10:21:20.607201 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" podUID="fd413d86-2cda-4079-a895-5cb60928a47f" Feb 03 10:21:20 crc kubenswrapper[5010]: E0203 10:21:20.787736 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Feb 03 10:21:20 crc kubenswrapper[5010]: E0203 10:21:20.788033 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvgrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-g8qz8_openstack-operators(3e47047f-9303-47e2-8312-c83315e1a3ff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:20 crc kubenswrapper[5010]: E0203 10:21:20.789757 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" podUID="3e47047f-9303-47e2-8312-c83315e1a3ff" Feb 03 10:21:21 crc kubenswrapper[5010]: E0203 10:21:21.610175 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" podUID="3e47047f-9303-47e2-8312-c83315e1a3ff" Feb 03 10:21:28 crc kubenswrapper[5010]: E0203 10:21:28.870173 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Feb 03 10:21:28 crc kubenswrapper[5010]: E0203 10:21:28.870879 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rvdn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-ck5g7_openstack-operators(e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:28 crc kubenswrapper[5010]: E0203 10:21:28.872109 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" podUID="e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58" Feb 03 10:21:29 crc kubenswrapper[5010]: I0203 10:21:29.296030 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:21:29 crc kubenswrapper[5010]: I0203 10:21:29.302634 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:21:29 crc kubenswrapper[5010]: I0203 10:21:29.403334 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qfj78" Feb 03 10:21:29 crc kubenswrapper[5010]: I0203 10:21:29.412682 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:21:29 crc kubenswrapper[5010]: E0203 10:21:29.781740 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" podUID="e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.004692 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.008780 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:21:30 crc kubenswrapper[5010]: E0203 10:21:30.075277 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Feb 03 10:21:30 crc kubenswrapper[5010]: E0203 10:21:30.075526 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k69sw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-gb8tp_openstack-operators(1a136ea1-ab68-4f60-8fb2-969363f25337): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:30 crc kubenswrapper[5010]: E0203 10:21:30.077769 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" podUID="1a136ea1-ab68-4f60-8fb2-969363f25337" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.106055 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-bqqr5" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.114972 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.513116 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.529729 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.692145 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-frpdt" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.700626 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:30 crc kubenswrapper[5010]: E0203 10:21:30.716967 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Feb 03 10:21:30 crc kubenswrapper[5010]: E0203 10:21:30.717176 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l9djc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-mrvfq_openstack-operators(84af1f21-c29e-4846-9ce1-ea345cbad4fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:30 crc kubenswrapper[5010]: E0203 10:21:30.718367 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" podUID="84af1f21-c29e-4846-9ce1-ea345cbad4fc" Feb 03 10:21:30 crc kubenswrapper[5010]: E0203 10:21:30.727717 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" podUID="1a136ea1-ab68-4f60-8fb2-969363f25337" Feb 03 10:21:31 crc kubenswrapper[5010]: E0203 10:21:31.089611 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 03 10:21:31 crc kubenswrapper[5010]: E0203 10:21:31.089782 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c2zwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-kj7mj_openstack-operators(2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:31 crc kubenswrapper[5010]: E0203 10:21:31.091357 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" podUID="2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a" Feb 03 10:21:31 crc kubenswrapper[5010]: E0203 10:21:31.725524 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" podUID="2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a" Feb 03 10:21:33 crc kubenswrapper[5010]: E0203 10:21:33.630624 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Feb 03 10:21:33 crc kubenswrapper[5010]: E0203 10:21:33.631105 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5mblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-pwdks_openstack-operators(4f112d60-8db7-4ec2-a82d-c7627ade05a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:33 crc kubenswrapper[5010]: E0203 10:21:33.632337 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" 
podUID="4f112d60-8db7-4ec2-a82d-c7627ade05a3" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.573925 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs"] Feb 03 10:21:34 crc kubenswrapper[5010]: W0203 10:21:34.605355 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76bde002_75f6_4c4a_af3d_16aec5a221f4.slice/crio-f4773bff07a4dfd1cfebdf7b2002157cc6730b642e68e69c1edafe73ec7917ea WatchSource:0}: Error finding container f4773bff07a4dfd1cfebdf7b2002157cc6730b642e68e69c1edafe73ec7917ea: Status 404 returned error can't find the container with id f4773bff07a4dfd1cfebdf7b2002157cc6730b642e68e69c1edafe73ec7917ea Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.933862 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" event={"ID":"a62d6669-692b-4909-b192-4348ac82a50d","Type":"ContainerStarted","Data":"b0b3ad05967ae6837dabe42486b28a7079a2e88e24fcb5f3a59ea9f9e247288a"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.937601 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.941238 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" event={"ID":"d33dc0fd-847b-41cc-a8ac-afde40120ba2","Type":"ContainerStarted","Data":"00fa265647cf1f7b9c13346c1838550c71fe45e20a23ade146b5e8d1e4e0627b"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.942292 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.944239 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" event={"ID":"9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe","Type":"ContainerStarted","Data":"7a7eff23bd74867bd0a9ddd288af7e1fff4887a78a8f58023966f0ff012f268e"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.944900 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.947365 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" event={"ID":"2f204595-5d98-4c16-b5d1-5004c6cae836","Type":"ContainerStarted","Data":"142936888d5fafbfc7cbebaf4db2afb9b61a022deca7cdfd0c81a0336697efd0"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.948549 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.950928 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" event={"ID":"27ab6ab7-e411-466c-bc4a-97d1660c547e","Type":"ContainerStarted","Data":"477c93c95c54195257e09fda2612cee052d9e515071d3b6caca81a04a814e2f6"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.951546 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.952889 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" event={"ID":"76bde002-75f6-4c4a-af3d-16aec5a221f4","Type":"ContainerStarted","Data":"f4773bff07a4dfd1cfebdf7b2002157cc6730b642e68e69c1edafe73ec7917ea"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.954341 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" event={"ID":"9dc494bd-d6ef-4a22-8312-67750ebb3dbe","Type":"ContainerStarted","Data":"898bd508c6b7348e0a4dff6ed01fd54493d4e41e40364a9b3af8e0e4d29f585c"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.955123 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.962616 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" event={"ID":"74803e29-48a3-4667-bcdb-a94f381545b5","Type":"ContainerStarted","Data":"eacc45b0c3dcafd851d243b297203ba1375c484f7d82674035f86e3ba800be39"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.964386 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.965883 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" event={"ID":"a7d72ea1-7126-4768-9cf8-f590ebd216d7","Type":"ContainerStarted","Data":"f26f3bf9b553bfabf2e4d7cc30f713eac02d83838ab543ccaf1eacf4c9fb3c56"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.966447 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.969105 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" event={"ID":"37a4f3fa-bbaf-433d-9835-6ac576351651","Type":"ContainerStarted","Data":"001695bd6766de9969188a95ab07b3e467ac326b310bc0115504d112185eb457"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.969780 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.972205 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"feb6be59c5f60eb4fb5b49379a30e3d1c2e1212fd73c563908d470b35420da88"} Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.017330 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" podStartSLOduration=9.129402895 podStartE2EDuration="38.017313249s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:01.169699163 +0000 UTC m=+1131.325675292" lastFinishedPulling="2026-02-03 10:21:30.057609517 +0000 UTC m=+1160.213585646" observedRunningTime="2026-02-03 
10:21:35.01657577 +0000 UTC m=+1165.172551899" watchObservedRunningTime="2026-02-03 10:21:35.017313249 +0000 UTC m=+1165.173289378" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.081411 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" podStartSLOduration=5.046628529 podStartE2EDuration="38.081388514s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.820760829 +0000 UTC m=+1130.976736968" lastFinishedPulling="2026-02-03 10:21:33.855520824 +0000 UTC m=+1164.011496953" observedRunningTime="2026-02-03 10:21:35.077696079 +0000 UTC m=+1165.233672208" watchObservedRunningTime="2026-02-03 10:21:35.081388514 +0000 UTC m=+1165.237364653" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.115147 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm"] Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.127830 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" podStartSLOduration=8.129103885 podStartE2EDuration="38.127807615s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.059737869 +0000 UTC m=+1130.215713998" lastFinishedPulling="2026-02-03 10:21:30.058441609 +0000 UTC m=+1160.214417728" observedRunningTime="2026-02-03 10:21:35.116670769 +0000 UTC m=+1165.272646898" watchObservedRunningTime="2026-02-03 10:21:35.127807615 +0000 UTC m=+1165.283783744" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.149649 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" podStartSLOduration=7.208232112 podStartE2EDuration="38.149634555s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.517019374 +0000 UTC m=+1130.672995493" lastFinishedPulling="2026-02-03 10:21:31.458421807 +0000 UTC m=+1161.614397936" observedRunningTime="2026-02-03 10:21:35.147487 +0000 UTC m=+1165.303463139" watchObservedRunningTime="2026-02-03 10:21:35.149634555 +0000 UTC m=+1165.305610674" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.518726 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" podStartSLOduration=6.890456467 podStartE2EDuration="38.518698826s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:20:59.830105196 +0000 UTC m=+1129.986081335" lastFinishedPulling="2026-02-03 10:21:31.458347565 +0000 UTC m=+1161.614323694" observedRunningTime="2026-02-03 10:21:35.173962439 +0000 UTC m=+1165.329938558" watchObservedRunningTime="2026-02-03 10:21:35.518698826 +0000 UTC m=+1165.674674945" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.595295 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc"] Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.607141 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" podStartSLOduration=7.625237124 podStartE2EDuration="38.607121206s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.476452303 
+0000 UTC m=+1130.632428432" lastFinishedPulling="2026-02-03 10:21:31.458336395 +0000 UTC m=+1161.614312514" observedRunningTime="2026-02-03 10:21:35.582508224 +0000 UTC m=+1165.738484353" watchObservedRunningTime="2026-02-03 10:21:35.607121206 +0000 UTC m=+1165.763097335" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.624474 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" podStartSLOduration=5.965803117 podStartE2EDuration="38.62445216s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:01.181202219 +0000 UTC m=+1131.337178348" lastFinishedPulling="2026-02-03 10:21:33.839851222 +0000 UTC m=+1163.995827391" observedRunningTime="2026-02-03 10:21:35.606271214 +0000 UTC m=+1165.762247333" watchObservedRunningTime="2026-02-03 10:21:35.62445216 +0000 UTC m=+1165.780428289" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.641115 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" podStartSLOduration=8.643191479 podStartE2EDuration="38.641094178s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.059667237 +0000 UTC m=+1130.215643366" lastFinishedPulling="2026-02-03 10:21:30.057569906 +0000 UTC m=+1160.213546065" observedRunningTime="2026-02-03 10:21:35.635708779 +0000 UTC m=+1165.791684908" watchObservedRunningTime="2026-02-03 10:21:35.641094178 +0000 UTC m=+1165.797070307" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.808851 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" podStartSLOduration=5.241285995 podStartE2EDuration="38.808825852s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.190970877 +0000 UTC m=+1130.346947006" lastFinishedPulling="2026-02-03 10:21:33.758510724 +0000 UTC m=+1163.914486863" observedRunningTime="2026-02-03 10:21:35.802973382 +0000 UTC m=+1165.958949511" watchObservedRunningTime="2026-02-03 10:21:35.808825852 +0000 UTC m=+1165.964801981" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.011730 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" event={"ID":"5fafda3f-e0cd-4477-9c10-442af83a835b","Type":"ContainerStarted","Data":"8ba28391a4f869facdc74d9bbf111998a780bf4d3143a6a7cdd54015d0bbd3e8"} Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.334052 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" event={"ID":"7f20ca5f-d244-45be-864d-3b8ad3d456ea","Type":"ContainerStarted","Data":"b81bf8e6753ad7f4df6d5ed304c0dc056977b0f6c4cbc6f464d8ef6777a17d21"} Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.334453 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.359692 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" event={"ID":"fd413d86-2cda-4079-a895-5cb60928a47f","Type":"ContainerStarted","Data":"c4738166e5c9501693a4f5252feb4adcf787d48f49829efdb460557e6325468b"} Feb 03 10:21:36 crc kubenswrapper[5010]: 
I0203 10:21:36.360808 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.366513 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" podStartSLOduration=10.118355435 podStartE2EDuration="39.366488443s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.810319181 +0000 UTC m=+1130.966295310" lastFinishedPulling="2026-02-03 10:21:30.058452189 +0000 UTC m=+1160.214428318" observedRunningTime="2026-02-03 10:21:36.3643953 +0000 UTC m=+1166.520371519" watchObservedRunningTime="2026-02-03 10:21:36.366488443 +0000 UTC m=+1166.522464582" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.397946 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" event={"ID":"8251c193-3c53-4651-87da-8b216cf907aa","Type":"ContainerStarted","Data":"02e6d18766df28ae2be61477770b6c60ba7708062cca853adf079caf02116663"} Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.398793 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.403968 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" event={"ID":"54aaeb1d-8a23-413f-b1f4-5115b167d78b","Type":"ContainerStarted","Data":"48e48eecc18118f5bf419511c50740bd07d8554f90e6ce9edac04b8f39285f60"} Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.407765 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" event={"ID":"42f76062-3a9d-45c1-b928-d9ca236ec8ab","Type":"ContainerStarted","Data":"95909e18d67efe0e0f957f06e6075c59341efd7d7470d60d6dfa2aceb48ca170"} Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.408345 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.409729 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" event={"ID":"21f46dec-fb01-4293-ad08-706eb63a8738","Type":"ContainerStarted","Data":"405c144a207ec53c2b358d4018acacf1d21418a9966232e0e12c9913c6b94d36"} Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.410066 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.645171 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" podStartSLOduration=6.242400987 podStartE2EDuration="39.645153005s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.476522615 +0000 UTC m=+1130.632498744" lastFinishedPulling="2026-02-03 10:21:33.879274623 +0000 UTC m=+1164.035250762" observedRunningTime="2026-02-03 10:21:36.640559977 +0000 UTC m=+1166.796536106" watchObservedRunningTime="2026-02-03 10:21:36.645153005 +0000 UTC m=+1166.801129134" Feb 03 10:21:36 crc 
kubenswrapper[5010]: I0203 10:21:36.666907 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" podStartSLOduration=6.62983706 podStartE2EDuration="39.666888423s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.822081263 +0000 UTC m=+1130.978057392" lastFinishedPulling="2026-02-03 10:21:33.859132626 +0000 UTC m=+1164.015108755" observedRunningTime="2026-02-03 10:21:36.663056894 +0000 UTC m=+1166.819033033" watchObservedRunningTime="2026-02-03 10:21:36.666888423 +0000 UTC m=+1166.822864552" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.782936 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" podStartSLOduration=7.09945412 podStartE2EDuration="39.78290138s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:01.178620032 +0000 UTC m=+1131.334596161" lastFinishedPulling="2026-02-03 10:21:33.862067292 +0000 UTC m=+1164.018043421" observedRunningTime="2026-02-03 10:21:36.777485791 +0000 UTC m=+1166.933461930" watchObservedRunningTime="2026-02-03 10:21:36.78290138 +0000 UTC m=+1166.938877519" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.785962 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" podStartSLOduration=6.7354720310000005 podStartE2EDuration="39.785941848s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.811647886 +0000 UTC m=+1130.967624025" lastFinishedPulling="2026-02-03 10:21:33.862117693 +0000 UTC m=+1164.018093842" observedRunningTime="2026-02-03 10:21:36.684606557 +0000 UTC m=+1166.840582696" watchObservedRunningTime="2026-02-03 10:21:36.785941848 +0000 UTC m=+1166.941917987" Feb 03 10:21:37 crc kubenswrapper[5010]: I0203 10:21:37.453667 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" event={"ID":"3e47047f-9303-47e2-8312-c83315e1a3ff","Type":"ContainerStarted","Data":"2bfd27ba413791f9894c8dae8f8c75fb06555b31d65ce43e9d97cafd9632186a"} Feb 03 10:21:37 crc kubenswrapper[5010]: I0203 10:21:37.453946 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" Feb 03 10:21:37 crc kubenswrapper[5010]: I0203 10:21:37.458388 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" event={"ID":"54aaeb1d-8a23-413f-b1f4-5115b167d78b","Type":"ContainerStarted","Data":"804414e75040674ae44ab56bccf0047302fe31aeb07b77b3f85749d554e2f554"} Feb 03 10:21:37 crc kubenswrapper[5010]: I0203 10:21:37.801824 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" podStartSLOduration=39.801801157 podStartE2EDuration="39.801801157s" podCreationTimestamp="2026-02-03 10:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:21:37.788796954 +0000 UTC m=+1167.944773093" watchObservedRunningTime="2026-02-03 10:21:37.801801157 +0000 UTC m=+1167.957777296" Feb 03 10:21:37 crc kubenswrapper[5010]: I0203 10:21:37.802617 5010 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" podStartSLOduration=4.631867124 podStartE2EDuration="40.802609768s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.476451053 +0000 UTC m=+1130.632427182" lastFinishedPulling="2026-02-03 10:21:36.647193697 +0000 UTC m=+1166.803169826" observedRunningTime="2026-02-03 10:21:37.754476643 +0000 UTC m=+1167.910452772" watchObservedRunningTime="2026-02-03 10:21:37.802609768 +0000 UTC m=+1167.958585897" Feb 03 10:21:38 crc kubenswrapper[5010]: I0203 10:21:38.470010 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:39 crc kubenswrapper[5010]: I0203 10:21:39.368157 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" Feb 03 10:21:39 crc kubenswrapper[5010]: I0203 10:21:39.373653 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" Feb 03 10:21:42 crc kubenswrapper[5010]: E0203 10:21:42.503311 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" podUID="84af1f21-c29e-4846-9ce1-ea345cbad4fc" Feb 03 10:21:43 crc kubenswrapper[5010]: I0203 10:21:43.573604 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" event={"ID":"76bde002-75f6-4c4a-af3d-16aec5a221f4","Type":"ContainerStarted","Data":"90c54b39b42c73082f43ed2105ae72b9c62f82eb4b2d22238ce3746be666885c"} Feb 03 10:21:43 crc kubenswrapper[5010]: I0203 10:21:43.574141 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:21:43 crc kubenswrapper[5010]: I0203 10:21:43.576669 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" event={"ID":"e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58","Type":"ContainerStarted","Data":"4784f5c4498de9a6f020a44a7d688652b4c1a311da24c75fd931088641891823"} Feb 03 10:21:43 crc kubenswrapper[5010]: I0203 10:21:43.577590 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" Feb 03 10:21:43 crc kubenswrapper[5010]: I0203 10:21:43.656987 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" podStartSLOduration=38.575914227 podStartE2EDuration="46.656955677s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:34.608103167 +0000 UTC m=+1164.764079296" lastFinishedPulling="2026-02-03 10:21:42.689144617 +0000 UTC m=+1172.845120746" observedRunningTime="2026-02-03 10:21:43.636846909 +0000 UTC m=+1173.792823038" watchObservedRunningTime="2026-02-03 10:21:43.656955677 +0000 UTC m=+1173.812931806" Feb 03 10:21:43 crc 
kubenswrapper[5010]: I0203 10:21:43.662190 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" podStartSLOduration=5.126418678 podStartE2EDuration="46.662168411s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:01.154570965 +0000 UTC m=+1131.310547104" lastFinishedPulling="2026-02-03 10:21:42.690320708 +0000 UTC m=+1172.846296837" observedRunningTime="2026-02-03 10:21:43.657346167 +0000 UTC m=+1173.813322296" watchObservedRunningTime="2026-02-03 10:21:43.662168411 +0000 UTC m=+1173.818144540" Feb 03 10:21:45 crc kubenswrapper[5010]: I0203 10:21:45.623206 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" event={"ID":"5fafda3f-e0cd-4477-9c10-442af83a835b","Type":"ContainerStarted","Data":"357d69064366b42c434470b17e36ea54790f1db35252cfc36c0312f802d971a9"} Feb 03 10:21:45 crc kubenswrapper[5010]: I0203 10:21:45.623671 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:21:45 crc kubenswrapper[5010]: I0203 10:21:45.625537 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" event={"ID":"2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a","Type":"ContainerStarted","Data":"107ab28d6491cdb0d441844d7e5d6fcf9652c74749468dd746510ff199dc9cc2"} Feb 03 10:21:45 crc kubenswrapper[5010]: I0203 10:21:45.714764 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" podStartSLOduration=39.245763804 podStartE2EDuration="48.714750235s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:35.138357806 +0000 UTC m=+1165.294333935" lastFinishedPulling="2026-02-03 10:21:44.607344237 +0000 UTC m=+1174.763320366" observedRunningTime="2026-02-03 10:21:45.711749868 +0000 UTC m=+1175.867725997" watchObservedRunningTime="2026-02-03 10:21:45.714750235 +0000 UTC m=+1175.870726364" Feb 03 10:21:45 crc kubenswrapper[5010]: I0203 10:21:45.739957 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" podStartSLOduration=3.572468826 podStartE2EDuration="47.739940084s" podCreationTimestamp="2026-02-03 10:20:58 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.918722453 +0000 UTC m=+1131.074698582" lastFinishedPulling="2026-02-03 10:21:45.086193711 +0000 UTC m=+1175.242169840" observedRunningTime="2026-02-03 10:21:45.737389409 +0000 UTC m=+1175.893365538" watchObservedRunningTime="2026-02-03 10:21:45.739940084 +0000 UTC m=+1175.895916213" Feb 03 10:21:46 crc kubenswrapper[5010]: E0203 10:21:46.503494 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" podUID="4f112d60-8db7-4ec2-a82d-c7627ade05a3" Feb 03 10:21:46 crc kubenswrapper[5010]: I0203 10:21:46.874757 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" 
event={"ID":"1a136ea1-ab68-4f60-8fb2-969363f25337","Type":"ContainerStarted","Data":"490269a44640bb9f1a9df7f0d361e3bafd3214e09c6d6e9bdb60100714018690"} Feb 03 10:21:46 crc kubenswrapper[5010]: I0203 10:21:46.875553 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" Feb 03 10:21:46 crc kubenswrapper[5010]: I0203 10:21:46.924024 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" podStartSLOduration=3.586910559 podStartE2EDuration="49.923998899s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:20:59.801514512 +0000 UTC m=+1129.957490641" lastFinishedPulling="2026-02-03 10:21:46.138602852 +0000 UTC m=+1176.294578981" observedRunningTime="2026-02-03 10:21:46.898537162 +0000 UTC m=+1177.054513291" watchObservedRunningTime="2026-02-03 10:21:46.923998899 +0000 UTC m=+1177.079975028" Feb 03 10:21:47 crc kubenswrapper[5010]: I0203 10:21:47.422010 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" Feb 03 10:21:47 crc kubenswrapper[5010]: I0203 10:21:47.431106 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" Feb 03 10:21:47 crc kubenswrapper[5010]: I0203 10:21:47.483411 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" Feb 03 10:21:47 crc kubenswrapper[5010]: I0203 10:21:47.517688 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" Feb 03 10:21:47 crc kubenswrapper[5010]: I0203 10:21:47.693608 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" Feb 03 10:21:47 crc kubenswrapper[5010]: I0203 10:21:47.697335 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" Feb 03 10:21:47 crc kubenswrapper[5010]: I0203 10:21:47.945694 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" Feb 03 10:21:48 crc kubenswrapper[5010]: I0203 10:21:48.422792 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" Feb 03 10:21:48 crc kubenswrapper[5010]: I0203 10:21:48.427419 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" Feb 03 10:21:48 crc kubenswrapper[5010]: I0203 10:21:48.428194 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" Feb 03 10:21:48 crc kubenswrapper[5010]: I0203 10:21:48.470809 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" Feb 03 10:21:48 crc kubenswrapper[5010]: I0203 10:21:48.748835 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" Feb 03 10:21:49 crc kubenswrapper[5010]: I0203 10:21:49.181389 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" Feb 03 10:21:49 crc kubenswrapper[5010]: I0203 10:21:49.477534 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:21:49 crc kubenswrapper[5010]: I0203 10:21:49.490355 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" Feb 03 10:21:50 crc kubenswrapper[5010]: I0203 10:21:50.120406 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:21:50 crc kubenswrapper[5010]: I0203 10:21:50.769375 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:57 crc kubenswrapper[5010]: I0203 10:21:57.649735 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" Feb 03 10:21:58 crc kubenswrapper[5010]: I0203 10:21:58.952469 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" event={"ID":"84af1f21-c29e-4846-9ce1-ea345cbad4fc","Type":"ContainerStarted","Data":"0cc548162ab45320514953fc43721b0edb440b21ad672ac052b4678a26b3d148"} Feb 03 10:21:58 crc kubenswrapper[5010]: I0203 10:21:58.953553 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" Feb 03 10:21:58 crc kubenswrapper[5010]: I0203 10:21:58.967342 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" podStartSLOduration=5.091378551 podStartE2EDuration="1m1.967321317s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:01.185631662 +0000 UTC m=+1131.341607791" lastFinishedPulling="2026-02-03 10:21:58.061574408 +0000 UTC m=+1188.217550557" observedRunningTime="2026-02-03 10:21:58.965483889 +0000 UTC m=+1189.121460028" watchObservedRunningTime="2026-02-03 10:21:58.967321317 +0000 UTC m=+1189.123297446" Feb 03 10:22:00 crc kubenswrapper[5010]: I0203 10:22:00.965118 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" event={"ID":"4f112d60-8db7-4ec2-a82d-c7627ade05a3","Type":"ContainerStarted","Data":"c84c019987a09666223fed742f6b03c976bf9021baab3ece6c66c96a4a605018"} Feb 03 10:22:00 crc kubenswrapper[5010]: I0203 10:22:00.965581 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" Feb 03 10:22:00 crc kubenswrapper[5010]: I0203 10:22:00.984437 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" podStartSLOduration=5.221696533 podStartE2EDuration="1m3.984415427s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:01.178795777 +0000 
UTC m=+1131.334771906" lastFinishedPulling="2026-02-03 10:21:59.941514671 +0000 UTC m=+1190.097490800" observedRunningTime="2026-02-03 10:22:00.978359821 +0000 UTC m=+1191.134335950" watchObservedRunningTime="2026-02-03 10:22:00.984415427 +0000 UTC m=+1191.140391556" Feb 03 10:22:08 crc kubenswrapper[5010]: I0203 10:22:08.426790 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" Feb 03 10:22:09 crc kubenswrapper[5010]: I0203 10:22:09.147629 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.440059 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkm9t"] Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.445478 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.447590 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-tzv47" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.447935 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.447984 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.449552 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.453142 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkm9t"] Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.495038 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9cm6"] Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.496259 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.498460 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.506841 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9cm6"] Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.597120 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrz69\" (UniqueName: \"kubernetes.io/projected/05e75df7-a63f-4821-8aa1-79b20fe2e100-kube-api-access-hrz69\") pod \"dnsmasq-dns-675f4bcbfc-lkm9t\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.597417 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cjqt\" (UniqueName: \"kubernetes.io/projected/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-kube-api-access-4cjqt\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.597516 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-config\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.597726 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e75df7-a63f-4821-8aa1-79b20fe2e100-config\") pod \"dnsmasq-dns-675f4bcbfc-lkm9t\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.597804 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.699709 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e75df7-a63f-4821-8aa1-79b20fe2e100-config\") pod \"dnsmasq-dns-675f4bcbfc-lkm9t\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.699755 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.699800 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrz69\" (UniqueName: \"kubernetes.io/projected/05e75df7-a63f-4821-8aa1-79b20fe2e100-kube-api-access-hrz69\") pod \"dnsmasq-dns-675f4bcbfc-lkm9t\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" 
Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.699845 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cjqt\" (UniqueName: \"kubernetes.io/projected/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-kube-api-access-4cjqt\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.699863 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-config\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.700763 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-config\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.700981 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.701571 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e75df7-a63f-4821-8aa1-79b20fe2e100-config\") pod \"dnsmasq-dns-675f4bcbfc-lkm9t\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.718588 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cjqt\" (UniqueName: \"kubernetes.io/projected/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-kube-api-access-4cjqt\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.720330 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrz69\" (UniqueName: \"kubernetes.io/projected/05e75df7-a63f-4821-8aa1-79b20fe2e100-kube-api-access-hrz69\") pod \"dnsmasq-dns-675f4bcbfc-lkm9t\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.763907 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.809751 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:24 crc kubenswrapper[5010]: I0203 10:22:24.577648 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkm9t"] Feb 03 10:22:24 crc kubenswrapper[5010]: I0203 10:22:24.677530 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9cm6"] Feb 03 10:22:25 crc kubenswrapper[5010]: I0203 10:22:25.257085 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" event={"ID":"05e75df7-a63f-4821-8aa1-79b20fe2e100","Type":"ContainerStarted","Data":"9e3776a5d3f524e0c405d299c28cd32959ccfee9a9abe7e9369d1c2023e2ff59"} Feb 03 10:22:25 crc kubenswrapper[5010]: I0203 10:22:25.259666 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" event={"ID":"6fec8d31-6436-4bfa-aae8-154ca2b74cf2","Type":"ContainerStarted","Data":"d7f9681b86e8830df0ea7e53a19e40fbea0d9f1b8f5d34f7c2f7074013fa6ad9"} Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.559505 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkm9t"] Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.585632 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kpzlc"] Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.590793 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.624775 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kpzlc"] Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.686896 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-config\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.686962 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.686988 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29t54\" (UniqueName: \"kubernetes.io/projected/86085e66-cdd4-45aa-af20-f8856cdfed1c-kube-api-access-29t54\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.787866 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.787924 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29t54\" (UniqueName: \"kubernetes.io/projected/86085e66-cdd4-45aa-af20-f8856cdfed1c-kube-api-access-29t54\") 
pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.788020 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-config\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.789070 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-config\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.789763 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.812987 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9cm6"] Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.816525 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29t54\" (UniqueName: \"kubernetes.io/projected/86085e66-cdd4-45aa-af20-f8856cdfed1c-kube-api-access-29t54\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.860515 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g56qr"] Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.863646 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.865785 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g56qr"] Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.890283 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-config\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.890336 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.890359 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64qtv\" (UniqueName: \"kubernetes.io/projected/e75b7259-a771-487b-9d36-990ce8571c11-kube-api-access-64qtv\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.928654 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.994490 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-config\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.994559 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.994588 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64qtv\" (UniqueName: \"kubernetes.io/projected/e75b7259-a771-487b-9d36-990ce8571c11-kube-api-access-64qtv\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.995474 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-config\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.995777 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:27 crc kubenswrapper[5010]: 
I0203 10:22:27.014552 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64qtv\" (UniqueName: \"kubernetes.io/projected/e75b7259-a771-487b-9d36-990ce8571c11-kube-api-access-64qtv\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.208722 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.522899 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kpzlc"] Feb 03 10:22:27 crc kubenswrapper[5010]: W0203 10:22:27.538769 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86085e66_cdd4_45aa_af20_f8856cdfed1c.slice/crio-e7f926e73e67c36bc02fcc6793463e0a1d4e2f826cfb6f5739264417666543a5 WatchSource:0}: Error finding container e7f926e73e67c36bc02fcc6793463e0a1d4e2f826cfb6f5739264417666543a5: Status 404 returned error can't find the container with id e7f926e73e67c36bc02fcc6793463e0a1d4e2f826cfb6f5739264417666543a5 Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.688599 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.689673 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.699157 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.699439 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.699600 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.702093 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.709240 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9nfm9" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.709274 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.709386 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.709240 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.725640 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g56qr"] Feb 03 10:22:27 crc kubenswrapper[5010]: W0203 10:22:27.746547 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode75b7259_a771_487b_9d36_990ce8571c11.slice/crio-474180be2209d7238391d27eab7728591f11004bc751b0c6114b9196608f8e03 WatchSource:0}: Error finding container 474180be2209d7238391d27eab7728591f11004bc751b0c6114b9196608f8e03: Status 404 returned error can't find the container with id 
474180be2209d7238391d27eab7728591f11004bc751b0c6114b9196608f8e03 Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821009 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce83ed2-cbef-4045-8822-6f58268b28b3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821068 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-config-data\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821126 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce83ed2-cbef-4045-8822-6f58268b28b3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821153 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821182 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821223 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821256 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5rwd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-kube-api-access-m5rwd\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821280 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821313 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc 
kubenswrapper[5010]: I0203 10:22:27.821348 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821380 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923381 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923655 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923706 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce83ed2-cbef-4045-8822-6f58268b28b3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923727 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-config-data\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923766 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce83ed2-cbef-4045-8822-6f58268b28b3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923782 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923804 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923821 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923845 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5rwd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-kube-api-access-m5rwd\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923863 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923888 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.924649 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.924683 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.926714 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.926879 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-config-data\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.927091 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.927812 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.930015 5010 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.934626 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.935530 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce83ed2-cbef-4045-8822-6f58268b28b3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.943806 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5rwd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-kube-api-access-m5rwd\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.948573 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce83ed2-cbef-4045-8822-6f58268b28b3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.950080 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.982501 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.988974 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.992206 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ld7g9" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.995274 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.000335 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.000368 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.000335 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.000697 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.000713 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.007157 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.035822 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.126988 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127034 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127076 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127104 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127175 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127200 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127232 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkwkl\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-kube-api-access-qkwkl\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127260 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127286 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127310 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127368 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350638 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350685 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350705 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350724 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350755 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350772 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350788 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkwkl\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-kube-api-access-qkwkl\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350811 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350836 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350868 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350889 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.352081 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.352158 5010 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.352207 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.352433 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.352433 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.353086 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.358764 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.359547 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.364533 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.367070 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.375702 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" 
event={"ID":"86085e66-cdd4-45aa-af20-f8856cdfed1c","Type":"ContainerStarted","Data":"e7f926e73e67c36bc02fcc6793463e0a1d4e2f826cfb6f5739264417666543a5"} Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.375803 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" event={"ID":"e75b7259-a771-487b-9d36-990ce8571c11","Type":"ContainerStarted","Data":"474180be2209d7238391d27eab7728591f11004bc751b0c6114b9196608f8e03"} Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.377573 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkwkl\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-kube-api-access-qkwkl\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.383312 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.617028 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.921415 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.922999 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.928096 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.939079 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.939154 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.939435 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-9rf4l" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.940577 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.966265 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.987013 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110196 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/449f0b91-9186-4a16-b1b4-7f199b57a428-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110469 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110490 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bng9\" (UniqueName: \"kubernetes.io/projected/449f0b91-9186-4a16-b1b4-7f199b57a428-kube-api-access-6bng9\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110511 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-operator-scripts\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110544 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/449f0b91-9186-4a16-b1b4-7f199b57a428-config-data-generated\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110590 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110624 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/449f0b91-9186-4a16-b1b4-7f199b57a428-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110665 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-config-data-default\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211249 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/449f0b91-9186-4a16-b1b4-7f199b57a428-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211297 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-config-data-default\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211328 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/449f0b91-9186-4a16-b1b4-7f199b57a428-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 
03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211356 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-kolla-config\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211376 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bng9\" (UniqueName: \"kubernetes.io/projected/449f0b91-9186-4a16-b1b4-7f199b57a428-kube-api-access-6bng9\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211396 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-operator-scripts\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211425 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/449f0b91-9186-4a16-b1b4-7f199b57a428-config-data-generated\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211465 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211616 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.212966 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-kolla-config\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.213773 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-config-data-default\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.222716 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/449f0b91-9186-4a16-b1b4-7f199b57a428-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.224561 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-operator-scripts\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.225199 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/449f0b91-9186-4a16-b1b4-7f199b57a428-config-data-generated\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.247027 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/449f0b91-9186-4a16-b1b4-7f199b57a428-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.274482 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bng9\" (UniqueName: \"kubernetes.io/projected/449f0b91-9186-4a16-b1b4-7f199b57a428-kube-api-access-6bng9\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.289374 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.561993 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.596915 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce83ed2-cbef-4045-8822-6f58268b28b3","Type":"ContainerStarted","Data":"97cdcebe285a4f7a484868c96029b1b0d97151d7f63016f73836ed870ad4197d"} Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.279321 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:22:30 crc kubenswrapper[5010]: W0203 10:22:30.353031 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2066c8b_8b89_4dcb_972d_aea4dcd1c105.slice/crio-6f662c0876b2bb6a1a91c65ab1f7cf8a34f9b5b27a5996afb9426d7a8621423b WatchSource:0}: Error finding container 6f662c0876b2bb6a1a91c65ab1f7cf8a34f9b5b27a5996afb9426d7a8621423b: Status 404 returned error can't find the container with id 6f662c0876b2bb6a1a91c65ab1f7cf8a34f9b5b27a5996afb9426d7a8621423b Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.761493 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.772675 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f2066c8b-8b89-4dcb-972d-aea4dcd1c105","Type":"ContainerStarted","Data":"6f662c0876b2bb6a1a91c65ab1f7cf8a34f9b5b27a5996afb9426d7a8621423b"} Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.772900 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.772917 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/openstack-galera-0"] Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.773516 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.779165 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.779400 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.783725 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-b99c2" Feb 03 10:22:30 crc kubenswrapper[5010]: W0203 10:22:30.798860 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod449f0b91_9186_4a16_b1b4_7f199b57a428.slice/crio-ada8e281fc672f2f7e83dfdda7529a9550b6b63bc9b50aeea13aa8c29edd7a6f WatchSource:0}: Error finding container ada8e281fc672f2f7e83dfdda7529a9550b6b63bc9b50aeea13aa8c29edd7a6f: Status 404 returned error can't find the container with id ada8e281fc672f2f7e83dfdda7529a9550b6b63bc9b50aeea13aa8c29edd7a6f Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.862326 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.864773 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.874274 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-vvbf9" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.874776 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.874978 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.875139 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.880812 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.967751 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.967854 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/95adc2d1-1093-484e-8580-53e244b420c8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.967901 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95adc2d1-1093-484e-8580-53e244b420c8-kolla-config\") pod \"memcached-0\" (UID: 
\"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.967975 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpvhp\" (UniqueName: \"kubernetes.io/projected/95adc2d1-1093-484e-8580-53e244b420c8-kube-api-access-xpvhp\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968046 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968074 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/87eb5dd8-7171-457a-8a95-eda98893319a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968099 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95adc2d1-1093-484e-8580-53e244b420c8-config-data\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968277 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95adc2d1-1093-484e-8580-53e244b420c8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968355 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968538 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968804 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87eb5dd8-7171-457a-8a95-eda98893319a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968835 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ddg4\" (UniqueName: \"kubernetes.io/projected/87eb5dd8-7171-457a-8a95-eda98893319a-kube-api-access-8ddg4\") pod \"openstack-cell1-galera-0\" (UID: 
\"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968911 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/87eb5dd8-7171-457a-8a95-eda98893319a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.070875 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.070929 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87eb5dd8-7171-457a-8a95-eda98893319a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.070982 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ddg4\" (UniqueName: \"kubernetes.io/projected/87eb5dd8-7171-457a-8a95-eda98893319a-kube-api-access-8ddg4\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071011 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/87eb5dd8-7171-457a-8a95-eda98893319a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071061 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071123 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/95adc2d1-1093-484e-8580-53e244b420c8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071567 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95adc2d1-1093-484e-8580-53e244b420c8-kolla-config\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071607 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpvhp\" (UniqueName: \"kubernetes.io/projected/95adc2d1-1093-484e-8580-53e244b420c8-kube-api-access-xpvhp\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071637 5010 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071664 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/87eb5dd8-7171-457a-8a95-eda98893319a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071689 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95adc2d1-1093-484e-8580-53e244b420c8-config-data\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071735 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95adc2d1-1093-484e-8580-53e244b420c8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071758 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.072124 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.072432 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.073229 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/87eb5dd8-7171-457a-8a95-eda98893319a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.073390 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95adc2d1-1093-484e-8580-53e244b420c8-kolla-config\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.073662 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95adc2d1-1093-484e-8580-53e244b420c8-config-data\") pod 
\"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.076523 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.086444 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.116878 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/87eb5dd8-7171-457a-8a95-eda98893319a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.129845 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.130633 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ddg4\" (UniqueName: \"kubernetes.io/projected/87eb5dd8-7171-457a-8a95-eda98893319a-kube-api-access-8ddg4\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.186770 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95adc2d1-1093-484e-8580-53e244b420c8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.190821 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/95adc2d1-1093-484e-8580-53e244b420c8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.192541 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87eb5dd8-7171-457a-8a95-eda98893319a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.194479 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpvhp\" (UniqueName: \"kubernetes.io/projected/95adc2d1-1093-484e-8580-53e244b420c8-kube-api-access-xpvhp\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.199040 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.457048 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.715912 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"449f0b91-9186-4a16-b1b4-7f199b57a428","Type":"ContainerStarted","Data":"ada8e281fc672f2f7e83dfdda7529a9550b6b63bc9b50aeea13aa8c29edd7a6f"} Feb 03 10:22:32 crc kubenswrapper[5010]: I0203 10:22:32.573423 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 10:22:32 crc kubenswrapper[5010]: W0203 10:22:32.615798 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87eb5dd8_7171_457a_8a95_eda98893319a.slice/crio-ac746dd6dbe76f98fc4607d0c5969d9b64edb8eb2959f5f5320b75e4d2506d61 WatchSource:0}: Error finding container ac746dd6dbe76f98fc4607d0c5969d9b64edb8eb2959f5f5320b75e4d2506d61: Status 404 returned error can't find the container with id ac746dd6dbe76f98fc4607d0c5969d9b64edb8eb2959f5f5320b75e4d2506d61 Feb 03 10:22:32 crc kubenswrapper[5010]: I0203 10:22:32.900257 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"87eb5dd8-7171-457a-8a95-eda98893319a","Type":"ContainerStarted","Data":"ac746dd6dbe76f98fc4607d0c5969d9b64edb8eb2959f5f5320b75e4d2506d61"} Feb 03 10:22:32 crc kubenswrapper[5010]: I0203 10:22:32.914055 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 03 10:22:32 crc kubenswrapper[5010]: W0203 10:22:32.927455 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95adc2d1_1093_484e_8580_53e244b420c8.slice/crio-44a0cbbfa053a4752d74f2cc1c60947bdf93e07f00c67505fbedc9b010e9ea12 WatchSource:0}: Error finding container 44a0cbbfa053a4752d74f2cc1c60947bdf93e07f00c67505fbedc9b010e9ea12: Status 404 returned error can't find the container with id 44a0cbbfa053a4752d74f2cc1c60947bdf93e07f00c67505fbedc9b010e9ea12 Feb 03 10:22:33 crc kubenswrapper[5010]: I0203 10:22:33.470821 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:22:33 crc kubenswrapper[5010]: I0203 10:22:33.472093 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 10:22:33 crc kubenswrapper[5010]: I0203 10:22:33.475371 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-k6brw" Feb 03 10:22:33 crc kubenswrapper[5010]: I0203 10:22:33.492023 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:22:33 crc kubenswrapper[5010]: I0203 10:22:33.615794 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxkf4\" (UniqueName: \"kubernetes.io/projected/7b0ebfb6-7019-4de6-88df-b2161da95e9b-kube-api-access-lxkf4\") pod \"kube-state-metrics-0\" (UID: \"7b0ebfb6-7019-4de6-88df-b2161da95e9b\") " pod="openstack/kube-state-metrics-0" Feb 03 10:22:33 crc kubenswrapper[5010]: I0203 10:22:33.796607 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxkf4\" (UniqueName: \"kubernetes.io/projected/7b0ebfb6-7019-4de6-88df-b2161da95e9b-kube-api-access-lxkf4\") pod \"kube-state-metrics-0\" (UID: \"7b0ebfb6-7019-4de6-88df-b2161da95e9b\") " pod="openstack/kube-state-metrics-0" Feb 03 10:22:33 crc kubenswrapper[5010]: I0203 10:22:33.823007 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxkf4\" (UniqueName: \"kubernetes.io/projected/7b0ebfb6-7019-4de6-88df-b2161da95e9b-kube-api-access-lxkf4\") pod \"kube-state-metrics-0\" (UID: \"7b0ebfb6-7019-4de6-88df-b2161da95e9b\") " pod="openstack/kube-state-metrics-0" Feb 03 10:22:34 crc kubenswrapper[5010]: I0203 10:22:34.033083 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"95adc2d1-1093-484e-8580-53e244b420c8","Type":"ContainerStarted","Data":"44a0cbbfa053a4752d74f2cc1c60947bdf93e07f00c67505fbedc9b010e9ea12"} Feb 03 10:22:34 crc kubenswrapper[5010]: I0203 10:22:34.101354 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 10:22:35 crc kubenswrapper[5010]: I0203 10:22:35.283625 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.187247 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7b0ebfb6-7019-4de6-88df-b2161da95e9b","Type":"ContainerStarted","Data":"99eae2ce273fff1db7b69f1325ef839ad84ecc780d3634ec59776f868fb7d556"} Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.237648 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ql6ht"] Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.239030 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.247015 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pwcwc" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.247278 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.247436 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.267765 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ql6ht"] Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.307857 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-krnr5"] Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.310567 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.324226 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-krnr5"] Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485315 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-run\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485383 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-lib\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485417 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1883c30e-4c38-468d-a5dc-91b07f167d67-ovn-controller-tls-certs\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485450 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-run\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485475 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-log-ovn\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485509 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b2780eb3-7b7a-47fe-bda0-2605419df774-scripts\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " 
pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485565 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-run-ovn\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485590 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-etc-ovs\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485615 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7xp5\" (UniqueName: \"kubernetes.io/projected/1883c30e-4c38-468d-a5dc-91b07f167d67-kube-api-access-d7xp5\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485651 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1883c30e-4c38-468d-a5dc-91b07f167d67-combined-ca-bundle\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485686 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-log\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485745 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f2fk\" (UniqueName: \"kubernetes.io/projected/b2780eb3-7b7a-47fe-bda0-2605419df774-kube-api-access-7f2fk\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485769 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1883c30e-4c38-468d-a5dc-91b07f167d67-scripts\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591633 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f2fk\" (UniqueName: \"kubernetes.io/projected/b2780eb3-7b7a-47fe-bda0-2605419df774-kube-api-access-7f2fk\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591720 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1883c30e-4c38-468d-a5dc-91b07f167d67-scripts\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 
10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591771 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-run\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591799 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-lib\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591842 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1883c30e-4c38-468d-a5dc-91b07f167d67-ovn-controller-tls-certs\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591861 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-run\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591875 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-log-ovn\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591894 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b2780eb3-7b7a-47fe-bda0-2605419df774-scripts\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591950 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-run-ovn\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591968 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-etc-ovs\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591985 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7xp5\" (UniqueName: \"kubernetes.io/projected/1883c30e-4c38-468d-a5dc-91b07f167d67-kube-api-access-d7xp5\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.592009 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1883c30e-4c38-468d-a5dc-91b07f167d67-combined-ca-bundle\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.592082 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-log\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.592673 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-log\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.592847 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-run\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.592949 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-lib\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.596602 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1883c30e-4c38-468d-a5dc-91b07f167d67-scripts\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.597195 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-run-ovn\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.597291 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-run\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.597395 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-log-ovn\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.599792 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-etc-ovs\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.600722 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1883c30e-4c38-468d-a5dc-91b07f167d67-ovn-controller-tls-certs\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.604551 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b2780eb3-7b7a-47fe-bda0-2605419df774-scripts\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.615425 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1883c30e-4c38-468d-a5dc-91b07f167d67-combined-ca-bundle\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.737384 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f2fk\" (UniqueName: \"kubernetes.io/projected/b2780eb3-7b7a-47fe-bda0-2605419df774-kube-api-access-7f2fk\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.749060 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7xp5\" (UniqueName: \"kubernetes.io/projected/1883c30e-4c38-468d-a5dc-91b07f167d67-kube-api-access-d7xp5\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.870724 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.935970 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.253421 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.255112 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.260380 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.414119 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.414440 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.414593 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.416400 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.416466 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-btqnv" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535022 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535396 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535446 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535473 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535499 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535531 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535568 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-config\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535668 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbzkw\" (UniqueName: \"kubernetes.io/projected/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-kube-api-access-cbzkw\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758379 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758454 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758491 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758530 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758557 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758582 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-config\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758736 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzkw\" (UniqueName: \"kubernetes.io/projected/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-kube-api-access-cbzkw\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758791 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 
10:22:37.759556 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.760435 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-config\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.762179 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.766268 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.773848 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.775612 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.783832 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.789782 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbzkw\" (UniqueName: \"kubernetes.io/projected/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-kube-api-access-cbzkw\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.815844 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ql6ht"] Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.827450 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:38 crc kubenswrapper[5010]: I0203 10:22:38.064347 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:38 crc kubenswrapper[5010]: I0203 10:22:38.441162 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ql6ht" event={"ID":"1883c30e-4c38-468d-a5dc-91b07f167d67","Type":"ContainerStarted","Data":"df053411d5d4bb018dc2b0b44a4dbe6e7facb3606a27d941f148d61d371e3c8e"} Feb 03 10:22:38 crc kubenswrapper[5010]: I0203 10:22:38.852051 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-krnr5"] Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.605421 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vqkq5"] Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.612196 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.624466 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.638393 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vqkq5"] Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.645959 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.699846 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5235b9fc-3723-4d8a-9851-e8ee89c0b084-config\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.699910 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5235b9fc-3723-4d8a-9851-e8ee89c0b084-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.700673 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcq87\" (UniqueName: \"kubernetes.io/projected/5235b9fc-3723-4d8a-9851-e8ee89c0b084-kube-api-access-mcq87\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.700728 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5235b9fc-3723-4d8a-9851-e8ee89c0b084-combined-ca-bundle\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.700747 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5235b9fc-3723-4d8a-9851-e8ee89c0b084-ovn-rundir\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.700793 5010 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5235b9fc-3723-4d8a-9851-e8ee89c0b084-ovs-rundir\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.802973 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5235b9fc-3723-4d8a-9851-e8ee89c0b084-ovs-rundir\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.803016 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5235b9fc-3723-4d8a-9851-e8ee89c0b084-config\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.803042 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5235b9fc-3723-4d8a-9851-e8ee89c0b084-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.803108 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcq87\" (UniqueName: \"kubernetes.io/projected/5235b9fc-3723-4d8a-9851-e8ee89c0b084-kube-api-access-mcq87\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.803154 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5235b9fc-3723-4d8a-9851-e8ee89c0b084-combined-ca-bundle\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.803173 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5235b9fc-3723-4d8a-9851-e8ee89c0b084-ovn-rundir\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.803543 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5235b9fc-3723-4d8a-9851-e8ee89c0b084-ovn-rundir\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.803595 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5235b9fc-3723-4d8a-9851-e8ee89c0b084-ovs-rundir\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.804256 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5235b9fc-3723-4d8a-9851-e8ee89c0b084-config\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.811513 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5235b9fc-3723-4d8a-9851-e8ee89c0b084-combined-ca-bundle\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.810933 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5235b9fc-3723-4d8a-9851-e8ee89c0b084-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.830084 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcq87\" (UniqueName: \"kubernetes.io/projected/5235b9fc-3723-4d8a-9851-e8ee89c0b084-kube-api-access-mcq87\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.961844 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.279017 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g56qr"] Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.297449 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-84hts"] Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.302609 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.309144 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.414397 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-84hts"] Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.444147 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-config\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.444251 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.444325 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.445269 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59czz\" (UniqueName: \"kubernetes.io/projected/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-kube-api-access-59czz\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.568969 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-config\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.569027 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.569088 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.569147 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59czz\" (UniqueName: \"kubernetes.io/projected/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-kube-api-access-59czz\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" 
Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.574365 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-config\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.576633 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.577801 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.597592 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59czz\" (UniqueName: \"kubernetes.io/projected/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-kube-api-access-59czz\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.643058 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.756833 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.767499 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.770206 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-f9vnn" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.771131 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.772657 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.772822 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.792757 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881263 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881348 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzqcj\" (UniqueName: \"kubernetes.io/projected/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-kube-api-access-dzqcj\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881412 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881463 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881529 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-config\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881594 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881643 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881669 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.070788 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.070950 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-config\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.071093 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.071148 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.071185 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.071295 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.071329 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzqcj\" (UniqueName: \"kubernetes.io/projected/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-kube-api-access-dzqcj\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.071356 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.072263 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.098714 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.105843 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-config\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.110242 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.112482 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.113151 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.126828 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.136340 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzqcj\" (UniqueName: \"kubernetes.io/projected/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-kube-api-access-dzqcj\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.141240 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.391244 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:46 crc kubenswrapper[5010]: W0203 10:22:46.627436 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2780eb3_7b7a_47fe_bda0_2605419df774.slice/crio-1a90c9b04425811a0ef9e3b3afcff1dbb033a87c45ec805ac4dc4671e2408c1e WatchSource:0}: Error finding container 1a90c9b04425811a0ef9e3b3afcff1dbb033a87c45ec805ac4dc4671e2408c1e: Status 404 returned error can't find the container with id 1a90c9b04425811a0ef9e3b3afcff1dbb033a87c45ec805ac4dc4671e2408c1e Feb 03 10:22:46 crc kubenswrapper[5010]: W0203 10:22:46.630655 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d6abf1f_9905_4f96_8d44_d7ef3f9f299d.slice/crio-ea26054dd17af5b2535f663a7e1af4a481da73710705cbe70c508a6d73769fbd WatchSource:0}: Error finding container ea26054dd17af5b2535f663a7e1af4a481da73710705cbe70c508a6d73769fbd: Status 404 returned error can't find the container with id ea26054dd17af5b2535f663a7e1af4a481da73710705cbe70c508a6d73769fbd Feb 03 10:22:46 crc kubenswrapper[5010]: I0203 10:22:46.762824 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-krnr5" event={"ID":"b2780eb3-7b7a-47fe-bda0-2605419df774","Type":"ContainerStarted","Data":"1a90c9b04425811a0ef9e3b3afcff1dbb033a87c45ec805ac4dc4671e2408c1e"} Feb 03 10:22:46 crc kubenswrapper[5010]: I0203 10:22:46.764699 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d","Type":"ContainerStarted","Data":"ea26054dd17af5b2535f663a7e1af4a481da73710705cbe70c508a6d73769fbd"} Feb 03 10:22:47 crc kubenswrapper[5010]: I0203 10:22:47.987043 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-84hts"] Feb 03 10:22:53 crc kubenswrapper[5010]: E0203 10:22:53.474780 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 03 10:22:53 crc kubenswrapper[5010]: E0203 10:22:53.475741 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5cfh5d5h695h6bh696h5h655h554h95h67h65bhf5h65fh567h545h5bbh67ch578h558h56h8hchf5h5bch59chbfh8bh667h647h5b6h79h5ffq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xpvhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(95adc2d1-1093-484e-8580-53e244b420c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:22:53 crc kubenswrapper[5010]: E0203 10:22:53.476905 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="95adc2d1-1093-484e-8580-53e244b420c8" Feb 03 10:22:53 crc kubenswrapper[5010]: E0203 10:22:53.848530 5010 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="95adc2d1-1093-484e-8580-53e244b420c8" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.801408 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.801967 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkwkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(f2066c8b-8b89-4dcb-972d-aea4dcd1c105): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.803636 5010 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.836482 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.836721 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5rwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(2ce83ed2-cbef-4045-8822-6f58268b28b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.837928 5010 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.854901 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.855036 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" Feb 03 10:22:57 crc kubenswrapper[5010]: W0203 10:22:57.021637 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ea6e430_f9a6_4850_b58e_24ac04fd49a2.slice/crio-b237a98e3b61244f5b8cbba9933237b1c87653782e7c801f5d548e23ebd2e6d6 WatchSource:0}: Error finding container b237a98e3b61244f5b8cbba9933237b1c87653782e7c801f5d548e23ebd2e6d6: Status 404 returned error can't find the container with id b237a98e3b61244f5b8cbba9933237b1c87653782e7c801f5d548e23ebd2e6d6 Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.048738 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.048986 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(449f0b91-9186-4a16-b1b4-7f199b57a428): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.050671 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="449f0b91-9186-4a16-b1b4-7f199b57a428" Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.076430 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.076592 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ddg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(87eb5dd8-7171-457a-8a95-eda98893319a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.077684 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="87eb5dd8-7171-457a-8a95-eda98893319a" Feb 03 10:22:57 crc kubenswrapper[5010]: I0203 10:22:57.871824 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" event={"ID":"3ea6e430-f9a6-4850-b58e-24ac04fd49a2","Type":"ContainerStarted","Data":"b237a98e3b61244f5b8cbba9933237b1c87653782e7c801f5d548e23ebd2e6d6"} Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.874847 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="87eb5dd8-7171-457a-8a95-eda98893319a" Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.874914 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" 
podUID="449f0b91-9186-4a16-b1b4-7f199b57a428" Feb 03 10:22:58 crc kubenswrapper[5010]: E0203 10:22:58.627103 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Feb 03 10:22:58 crc kubenswrapper[5010]: E0203 10:22:58.627273 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch64ch98h5fhbch679h649h548h55h5f4h5c8h7fh686h677h5c5h5bh5b7h657h67dh58bh77h68ch76h564h9h5fch5f7hb8h54ch649h98h74q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7f2fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-krnr5_openstack(b2780eb3-7b7a-47fe-bda0-2605419df774): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:22:58 crc kubenswrapper[5010]: E0203 10:22:58.628420 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-krnr5" podUID="b2780eb3-7b7a-47fe-bda0-2605419df774" Feb 03 10:22:58 crc kubenswrapper[5010]: E0203 10:22:58.878768 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-krnr5" podUID="b2780eb3-7b7a-47fe-bda0-2605419df774" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.408132 5010 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.409135 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrz69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-lkm9t_openstack(05e75df7-a63f-4821-8aa1-79b20fe2e100): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.410456 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" podUID="05e75df7-a63f-4821-8aa1-79b20fe2e100" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.420487 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.420666 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4cjqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-k9cm6_openstack(6fec8d31-6436-4bfa-aae8-154ca2b74cf2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.421906 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" podUID="6fec8d31-6436-4bfa-aae8-154ca2b74cf2" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.465461 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.465651 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64qtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-g56qr_openstack(e75b7259-a771-487b-9d36-990ce8571c11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.467021 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" podUID="e75b7259-a771-487b-9d36-990ce8571c11" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.957164 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.957715 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch64ch98h5fhbch679h649h548h55h5f4h5c8h7fh686h677h5c5h5bh5b7h657h67dh58bh77h68ch76h564h9h5fch5f7hb8h54ch649h98h74q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d7xp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ql6ht_openstack(1883c30e-4c38-468d-a5dc-91b07f167d67): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.959364 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ql6ht" podUID="1883c30e-4c38-468d-a5dc-91b07f167d67" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.995412 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.995587 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29t54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-kpzlc_openstack(86085e66-cdd4-45aa-af20-f8856cdfed1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.997349 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" podUID="86085e66-cdd4-45aa-af20-f8856cdfed1c" Feb 03 10:23:05 crc kubenswrapper[5010]: E0203 10:23:05.277712 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-ql6ht" podUID="1883c30e-4c38-468d-a5dc-91b07f167d67" Feb 03 10:23:05 crc kubenswrapper[5010]: E0203 10:23:05.277983 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" podUID="86085e66-cdd4-45aa-af20-f8856cdfed1c" Feb 03 10:23:05 crc kubenswrapper[5010]: I0203 10:23:05.310016 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 10:23:05 crc kubenswrapper[5010]: I0203 10:23:05.779984 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vqkq5"] Feb 03 10:23:06 crc kubenswrapper[5010]: W0203 10:23:06.068890 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5235b9fc_3723_4d8a_9851_e8ee89c0b084.slice/crio-9996fa3b5dd316c984d433b961f88b86fa6cb581820080df11cf29f09af4b0d6 WatchSource:0}: Error finding container 9996fa3b5dd316c984d433b961f88b86fa6cb581820080df11cf29f09af4b0d6: Status 404 returned error can't find the container with id 9996fa3b5dd316c984d433b961f88b86fa6cb581820080df11cf29f09af4b0d6 Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.125911 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.143640 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64qtv\" (UniqueName: \"kubernetes.io/projected/e75b7259-a771-487b-9d36-990ce8571c11-kube-api-access-64qtv\") pod \"e75b7259-a771-487b-9d36-990ce8571c11\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.143690 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-config\") pod \"e75b7259-a771-487b-9d36-990ce8571c11\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.143727 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-dns-svc\") pod \"e75b7259-a771-487b-9d36-990ce8571c11\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.144302 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-config" (OuterVolumeSpecName: "config") pod "e75b7259-a771-487b-9d36-990ce8571c11" (UID: "e75b7259-a771-487b-9d36-990ce8571c11"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.144323 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e75b7259-a771-487b-9d36-990ce8571c11" (UID: "e75b7259-a771-487b-9d36-990ce8571c11"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.149527 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e75b7259-a771-487b-9d36-990ce8571c11-kube-api-access-64qtv" (OuterVolumeSpecName: "kube-api-access-64qtv") pod "e75b7259-a771-487b-9d36-990ce8571c11" (UID: "e75b7259-a771-487b-9d36-990ce8571c11"). InnerVolumeSpecName "kube-api-access-64qtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.244767 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64qtv\" (UniqueName: \"kubernetes.io/projected/e75b7259-a771-487b-9d36-990ce8571c11-kube-api-access-64qtv\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.245065 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.245083 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.282804 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.282847 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" event={"ID":"e75b7259-a771-487b-9d36-990ce8571c11","Type":"ContainerDied","Data":"474180be2209d7238391d27eab7728591f11004bc751b0c6114b9196608f8e03"} Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.284840 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vqkq5" event={"ID":"5235b9fc-3723-4d8a-9851-e8ee89c0b084","Type":"ContainerStarted","Data":"9996fa3b5dd316c984d433b961f88b86fa6cb581820080df11cf29f09af4b0d6"} Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.285639 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce","Type":"ContainerStarted","Data":"b93c74370db9b0aef0337572f57615b0154fd2eb16769fa4ad2086643a06821a"} Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.346167 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g56qr"] Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.355649 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g56qr"] Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.466593 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.468792 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.512987 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e75b7259-a771-487b-9d36-990ce8571c11" path="/var/lib/kubelet/pods/e75b7259-a771-487b-9d36-990ce8571c11/volumes" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.649491 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrz69\" (UniqueName: \"kubernetes.io/projected/05e75df7-a63f-4821-8aa1-79b20fe2e100-kube-api-access-hrz69\") pod \"05e75df7-a63f-4821-8aa1-79b20fe2e100\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.650672 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05e75df7-a63f-4821-8aa1-79b20fe2e100-config" (OuterVolumeSpecName: "config") pod "05e75df7-a63f-4821-8aa1-79b20fe2e100" (UID: "05e75df7-a63f-4821-8aa1-79b20fe2e100"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.650711 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e75df7-a63f-4821-8aa1-79b20fe2e100-config\") pod \"05e75df7-a63f-4821-8aa1-79b20fe2e100\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.650772 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-config\") pod \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.651404 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-config" (OuterVolumeSpecName: "config") pod "6fec8d31-6436-4bfa-aae8-154ca2b74cf2" (UID: "6fec8d31-6436-4bfa-aae8-154ca2b74cf2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.651482 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-dns-svc\") pod \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.651530 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cjqt\" (UniqueName: \"kubernetes.io/projected/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-kube-api-access-4cjqt\") pod \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.652636 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6fec8d31-6436-4bfa-aae8-154ca2b74cf2" (UID: "6fec8d31-6436-4bfa-aae8-154ca2b74cf2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.653964 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.653987 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e75df7-a63f-4821-8aa1-79b20fe2e100-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.653996 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.655399 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-kube-api-access-4cjqt" (OuterVolumeSpecName: "kube-api-access-4cjqt") pod "6fec8d31-6436-4bfa-aae8-154ca2b74cf2" (UID: "6fec8d31-6436-4bfa-aae8-154ca2b74cf2"). InnerVolumeSpecName "kube-api-access-4cjqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.655446 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05e75df7-a63f-4821-8aa1-79b20fe2e100-kube-api-access-hrz69" (OuterVolumeSpecName: "kube-api-access-hrz69") pod "05e75df7-a63f-4821-8aa1-79b20fe2e100" (UID: "05e75df7-a63f-4821-8aa1-79b20fe2e100"). InnerVolumeSpecName "kube-api-access-hrz69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.755437 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrz69\" (UniqueName: \"kubernetes.io/projected/05e75df7-a63f-4821-8aa1-79b20fe2e100-kube-api-access-hrz69\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.755473 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cjqt\" (UniqueName: \"kubernetes.io/projected/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-kube-api-access-4cjqt\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.295964 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.295989 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" event={"ID":"6fec8d31-6436-4bfa-aae8-154ca2b74cf2","Type":"ContainerDied","Data":"d7f9681b86e8830df0ea7e53a19e40fbea0d9f1b8f5d34f7c2f7074013fa6ad9"} Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.298102 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" event={"ID":"05e75df7-a63f-4821-8aa1-79b20fe2e100","Type":"ContainerDied","Data":"9e3776a5d3f524e0c405d299c28cd32959ccfee9a9abe7e9369d1c2023e2ff59"} Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.298693 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.357011 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9cm6"] Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.362251 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9cm6"] Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.381337 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkm9t"] Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.387268 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkm9t"] Feb 03 10:23:07 crc kubenswrapper[5010]: E0203 10:23:07.410981 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05e75df7_a63f_4821_8aa1_79b20fe2e100.slice/crio-9e3776a5d3f524e0c405d299c28cd32959ccfee9a9abe7e9369d1c2023e2ff59\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05e75df7_a63f_4821_8aa1_79b20fe2e100.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fec8d31_6436_4bfa_aae8_154ca2b74cf2.slice/crio-d7f9681b86e8830df0ea7e53a19e40fbea0d9f1b8f5d34f7c2f7074013fa6ad9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fec8d31_6436_4bfa_aae8_154ca2b74cf2.slice\": RecentStats: unable to find data in memory cache]" Feb 03 10:23:08 crc kubenswrapper[5010]: I0203 10:23:08.512669 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05e75df7-a63f-4821-8aa1-79b20fe2e100" path="/var/lib/kubelet/pods/05e75df7-a63f-4821-8aa1-79b20fe2e100/volumes" Feb 03 10:23:08 crc kubenswrapper[5010]: I0203 10:23:08.513493 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fec8d31-6436-4bfa-aae8-154ca2b74cf2" path="/var/lib/kubelet/pods/6fec8d31-6436-4bfa-aae8-154ca2b74cf2/volumes" Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.314774 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce","Type":"ContainerStarted","Data":"987e0ad36e6e1c0af04f4ea300830c129ba933216eb5a6c0730fd3baf74641f5"} Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.315166 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce","Type":"ContainerStarted","Data":"26a790849fba98e2c0a6b6980ba93bb6a65ba48773df7fc18dc3719486a99aa6"} Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.320392 5010 generic.go:334] "Generic (PLEG): container finished" podID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerID="dcafe9c15b252f4afce63db43717e61b273dee3af36eabf6852fd51f8f27c930" exitCode=0 Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.320740 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" event={"ID":"3ea6e430-f9a6-4850-b58e-24ac04fd49a2","Type":"ContainerDied","Data":"dcafe9c15b252f4afce63db43717e61b273dee3af36eabf6852fd51f8f27c930"} Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.323565 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vqkq5" 
event={"ID":"5235b9fc-3723-4d8a-9851-e8ee89c0b084","Type":"ContainerStarted","Data":"d91898ed898aabcde5ef7805055788efffab2baa30ab6b08b03c958818960ece"} Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.328438 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d","Type":"ContainerStarted","Data":"6b569330655568b61d54a1a1c7cb51f6293c0fbdb3c0638d49c43584d6d27ab4"} Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.328499 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d","Type":"ContainerStarted","Data":"1e0c95cdf7c43e6e556f539bafd04b2edd3c565bf3490a19b52e90dc365be45e"} Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.341286 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=28.018237232 podStartE2EDuration="30.341263537s" podCreationTimestamp="2026-02-03 10:22:39 +0000 UTC" firstStartedPulling="2026-02-03 10:23:05.762654883 +0000 UTC m=+1255.918631012" lastFinishedPulling="2026-02-03 10:23:08.085681188 +0000 UTC m=+1258.241657317" observedRunningTime="2026-02-03 10:23:09.340425435 +0000 UTC m=+1259.496401564" watchObservedRunningTime="2026-02-03 10:23:09.341263537 +0000 UTC m=+1259.497239686" Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.372816 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.595444352 podStartE2EDuration="33.372791599s" podCreationTimestamp="2026-02-03 10:22:36 +0000 UTC" firstStartedPulling="2026-02-03 10:22:46.638021372 +0000 UTC m=+1236.793997501" lastFinishedPulling="2026-02-03 10:23:06.415368619 +0000 UTC m=+1256.571344748" observedRunningTime="2026-02-03 10:23:09.36932573 +0000 UTC m=+1259.525301869" watchObservedRunningTime="2026-02-03 10:23:09.372791599 +0000 UTC m=+1259.528767748" Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.413658 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vqkq5" podStartSLOduration=28.334409892 podStartE2EDuration="30.413636352s" podCreationTimestamp="2026-02-03 10:22:39 +0000 UTC" firstStartedPulling="2026-02-03 10:23:06.091368427 +0000 UTC m=+1256.247344556" lastFinishedPulling="2026-02-03 10:23:08.170594887 +0000 UTC m=+1258.326571016" observedRunningTime="2026-02-03 10:23:09.409880495 +0000 UTC m=+1259.565856634" watchObservedRunningTime="2026-02-03 10:23:09.413636352 +0000 UTC m=+1259.569612491" Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.805104 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kpzlc"] Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.868889 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bsmfs"] Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.870547 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.876106 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.877042 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bsmfs"] Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.024701 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.024881 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf9dm\" (UniqueName: \"kubernetes.io/projected/794d29fd-0784-4f8c-8f62-e6753d046def-kube-api-access-hf9dm\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.024935 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.025062 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-config\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.025121 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.126535 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.126618 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.126718 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf9dm\" (UniqueName: \"kubernetes.io/projected/794d29fd-0784-4f8c-8f62-e6753d046def-kube-api-access-hf9dm\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.126742 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.126780 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-config\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.128694 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-config\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.129072 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.129975 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.130196 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.148479 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf9dm\" (UniqueName: \"kubernetes.io/projected/794d29fd-0784-4f8c-8f62-e6753d046def-kube-api-access-hf9dm\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.204105 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.340189 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" event={"ID":"3ea6e430-f9a6-4850-b58e-24ac04fd49a2","Type":"ContainerStarted","Data":"2a39e93057d80e1a2e85ebc3a8a730552d12cf63e0e15cf7d8339a09d27bdab7"} Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.361742 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" podStartSLOduration=20.729328716 podStartE2EDuration="30.361717232s" podCreationTimestamp="2026-02-03 10:22:40 +0000 UTC" firstStartedPulling="2026-02-03 10:22:57.024535699 +0000 UTC m=+1247.180511818" lastFinishedPulling="2026-02-03 10:23:06.656924205 +0000 UTC m=+1256.812900334" observedRunningTime="2026-02-03 10:23:10.359745052 +0000 UTC m=+1260.515721181" watchObservedRunningTime="2026-02-03 10:23:10.361717232 +0000 UTC m=+1260.517693361" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.467380 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.535753 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29t54\" (UniqueName: \"kubernetes.io/projected/86085e66-cdd4-45aa-af20-f8856cdfed1c-kube-api-access-29t54\") pod \"86085e66-cdd4-45aa-af20-f8856cdfed1c\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.536334 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-dns-svc\") pod \"86085e66-cdd4-45aa-af20-f8856cdfed1c\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.536404 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-config\") pod \"86085e66-cdd4-45aa-af20-f8856cdfed1c\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.540660 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-config" (OuterVolumeSpecName: "config") pod "86085e66-cdd4-45aa-af20-f8856cdfed1c" (UID: "86085e66-cdd4-45aa-af20-f8856cdfed1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.541574 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "86085e66-cdd4-45aa-af20-f8856cdfed1c" (UID: "86085e66-cdd4-45aa-af20-f8856cdfed1c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.591493 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86085e66-cdd4-45aa-af20-f8856cdfed1c-kube-api-access-29t54" (OuterVolumeSpecName: "kube-api-access-29t54") pod "86085e66-cdd4-45aa-af20-f8856cdfed1c" (UID: "86085e66-cdd4-45aa-af20-f8856cdfed1c"). InnerVolumeSpecName "kube-api-access-29t54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.642744 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29t54\" (UniqueName: \"kubernetes.io/projected/86085e66-cdd4-45aa-af20-f8856cdfed1c-kube-api-access-29t54\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.643192 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.643204 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.646617 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.873819 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bsmfs"] Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.065453 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.130428 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.349723 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7b0ebfb6-7019-4de6-88df-b2161da95e9b","Type":"ContainerStarted","Data":"8566fd9acbf9b37a7c0e5b8b574fab43fa6c097fb1878bb86a8c41a2e79e2d53"} Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.349857 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.351379 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" event={"ID":"794d29fd-0784-4f8c-8f62-e6753d046def","Type":"ContainerStarted","Data":"098a23dc68ddad3e911c76c8c4f89d48833726d8e974b792fb670b88ee30cae7"} Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.351408 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" event={"ID":"86085e66-cdd4-45aa-af20-f8856cdfed1c","Type":"ContainerDied","Data":"e7f926e73e67c36bc02fcc6793463e0a1d4e2f826cfb6f5739264417666543a5"} Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.354499 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"95adc2d1-1093-484e-8580-53e244b420c8","Type":"ContainerStarted","Data":"54b122c9dd1ed2e74e27738123169f3f9ae6b63c80ddae2d33dd5ab19170ae9c"} Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.355443 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.355563 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.355582 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.373306 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.078888672 podStartE2EDuration="38.37326223s" podCreationTimestamp="2026-02-03 10:22:33 +0000 UTC" firstStartedPulling="2026-02-03 10:22:35.297815846 +0000 UTC m=+1225.453791985" lastFinishedPulling="2026-02-03 10:23:10.592189414 +0000 UTC m=+1260.748165543" observedRunningTime="2026-02-03 10:23:11.371436213 +0000 UTC m=+1261.527412342" watchObservedRunningTime="2026-02-03 10:23:11.37326223 +0000 UTC m=+1261.529238369" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.391252 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.391340 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.391422 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.726790818 podStartE2EDuration="41.391414018s" podCreationTimestamp="2026-02-03 10:22:30 +0000 UTC" firstStartedPulling="2026-02-03 10:22:32.945117336 +0000 UTC m=+1223.101093455" lastFinishedPulling="2026-02-03 10:23:10.609740526 +0000 UTC m=+1260.765716655" observedRunningTime="2026-02-03 10:23:11.387516497 +0000 UTC m=+1261.543492616" watchObservedRunningTime="2026-02-03 10:23:11.391414018 +0000 UTC m=+1261.547390147" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.453011 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.640081 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kpzlc"] Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.647441 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kpzlc"] Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.396591 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"87eb5dd8-7171-457a-8a95-eda98893319a","Type":"ContainerStarted","Data":"836d10e13031e1e589fd13f0fcda7b9cdf717cf593196a23ab06f2b0deb83c45"} Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.399917 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"449f0b91-9186-4a16-b1b4-7f199b57a428","Type":"ContainerStarted","Data":"d59aa2650c8950a2a4ba7a7dc97b6834e3c2613e89263135debc00e0122c70c1"} Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.402684 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce83ed2-cbef-4045-8822-6f58268b28b3","Type":"ContainerStarted","Data":"10e7a7e1923769d25869f1642046743d27038f14081a9edd79e0d2a9d1c7d095"} Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.403861 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-krnr5" 
event={"ID":"b2780eb3-7b7a-47fe-bda0-2605419df774","Type":"ContainerStarted","Data":"70afa1a572760d9bf687091b456c83d50fa5b5467491f14c5c72b196b76b069f"} Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.405285 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f2066c8b-8b89-4dcb-972d-aea4dcd1c105","Type":"ContainerStarted","Data":"35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae"} Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.427049 5010 generic.go:334] "Generic (PLEG): container finished" podID="794d29fd-0784-4f8c-8f62-e6753d046def" containerID="ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b" exitCode=0 Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.429941 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" event={"ID":"794d29fd-0784-4f8c-8f62-e6753d046def","Type":"ContainerDied","Data":"ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b"} Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.549819 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86085e66-cdd4-45aa-af20-f8856cdfed1c" path="/var/lib/kubelet/pods/86085e66-cdd4-45aa-af20-f8856cdfed1c/volumes" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.111810 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.437021 5010 generic.go:334] "Generic (PLEG): container finished" podID="b2780eb3-7b7a-47fe-bda0-2605419df774" containerID="70afa1a572760d9bf687091b456c83d50fa5b5467491f14c5c72b196b76b069f" exitCode=0 Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.437129 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-krnr5" event={"ID":"b2780eb3-7b7a-47fe-bda0-2605419df774","Type":"ContainerDied","Data":"70afa1a572760d9bf687091b456c83d50fa5b5467491f14c5c72b196b76b069f"} Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.491039 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.739784 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.741674 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.746660 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.747403 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.748119 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-kv5g5" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.748368 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.770665 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.831727 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvzn5\" (UniqueName: \"kubernetes.io/projected/5158e153-9918-4fce-8f2f-75a87b96562b-kube-api-access-bvzn5\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.832099 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5158e153-9918-4fce-8f2f-75a87b96562b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.832663 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5158e153-9918-4fce-8f2f-75a87b96562b-scripts\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.832818 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.832947 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.833122 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.833260 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5158e153-9918-4fce-8f2f-75a87b96562b-config\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: 
I0203 10:23:13.935257 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5158e153-9918-4fce-8f2f-75a87b96562b-scripts\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.935331 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.935378 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.935470 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.935496 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5158e153-9918-4fce-8f2f-75a87b96562b-config\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.935530 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvzn5\" (UniqueName: \"kubernetes.io/projected/5158e153-9918-4fce-8f2f-75a87b96562b-kube-api-access-bvzn5\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.935552 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5158e153-9918-4fce-8f2f-75a87b96562b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.022786 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5158e153-9918-4fce-8f2f-75a87b96562b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.022830 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5158e153-9918-4fce-8f2f-75a87b96562b-scripts\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.023322 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5158e153-9918-4fce-8f2f-75a87b96562b-config\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.029638 5010 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.030552 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvzn5\" (UniqueName: \"kubernetes.io/projected/5158e153-9918-4fce-8f2f-75a87b96562b-kube-api-access-bvzn5\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.039058 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.044121 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.129955 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.965064 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 03 10:23:15 crc kubenswrapper[5010]: I0203 10:23:15.475661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5158e153-9918-4fce-8f2f-75a87b96562b","Type":"ContainerStarted","Data":"f96432e5c31a342fc4cc5216702bc5bb620c8632cf8e19b7798bd166fe95782b"} Feb 03 10:23:15 crc kubenswrapper[5010]: I0203 10:23:15.647383 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:23:16 crc kubenswrapper[5010]: I0203 10:23:16.462737 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 03 10:23:17 crc kubenswrapper[5010]: I0203 10:23:17.495658 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" event={"ID":"794d29fd-0784-4f8c-8f62-e6753d046def","Type":"ContainerStarted","Data":"a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee"} Feb 03 10:23:18 crc kubenswrapper[5010]: I0203 10:23:18.515769 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-krnr5" event={"ID":"b2780eb3-7b7a-47fe-bda0-2605419df774","Type":"ContainerStarted","Data":"9d2da86adba088a1f50922b1889760440b7d60b277c4fb7e78d3c58c7765ecc8"} Feb 03 10:23:19 crc kubenswrapper[5010]: I0203 10:23:19.538262 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-krnr5" event={"ID":"b2780eb3-7b7a-47fe-bda0-2605419df774","Type":"ContainerStarted","Data":"81b70fba5bf3aa691c2b035a1c743357c9f13960700d426f13526599108aa833"} Feb 03 10:23:19 crc kubenswrapper[5010]: I0203 10:23:19.538754 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:19 crc kubenswrapper[5010]: I0203 10:23:19.565983 5010 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ovn-controller-ovs-krnr5" podStartSLOduration=18.284626586999998 podStartE2EDuration="43.565961832s" podCreationTimestamp="2026-02-03 10:22:36 +0000 UTC" firstStartedPulling="2026-02-03 10:22:46.637961741 +0000 UTC m=+1236.793937860" lastFinishedPulling="2026-02-03 10:23:11.919296976 +0000 UTC m=+1262.075273105" observedRunningTime="2026-02-03 10:23:19.562920054 +0000 UTC m=+1269.718896193" watchObservedRunningTime="2026-02-03 10:23:19.565961832 +0000 UTC m=+1269.721937961" Feb 03 10:23:19 crc kubenswrapper[5010]: I0203 10:23:19.586139 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" podStartSLOduration=10.586116852 podStartE2EDuration="10.586116852s" podCreationTimestamp="2026-02-03 10:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:23:19.578051934 +0000 UTC m=+1269.734028063" watchObservedRunningTime="2026-02-03 10:23:19.586116852 +0000 UTC m=+1269.742092991" Feb 03 10:23:20 crc kubenswrapper[5010]: I0203 10:23:20.548796 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5158e153-9918-4fce-8f2f-75a87b96562b","Type":"ContainerStarted","Data":"f0d988c7c6bfff8238bc3a032a99f23a54a66a18d264bfaa3fc707cba7ce94d0"} Feb 03 10:23:20 crc kubenswrapper[5010]: I0203 10:23:20.549618 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:23:20 crc kubenswrapper[5010]: I0203 10:23:20.549638 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5158e153-9918-4fce-8f2f-75a87b96562b","Type":"ContainerStarted","Data":"cf5c0a250f0c0d83ecc5d50cfadc68dcae7bff5a6f435d8df4e66a85d0d0825b"} Feb 03 10:23:20 crc kubenswrapper[5010]: I0203 10:23:20.549654 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:23:20 crc kubenswrapper[5010]: I0203 10:23:20.549797 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 03 10:23:20 crc kubenswrapper[5010]: I0203 10:23:20.568041 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.920256399 podStartE2EDuration="7.568026485s" podCreationTimestamp="2026-02-03 10:23:13 +0000 UTC" firstStartedPulling="2026-02-03 10:23:14.981763955 +0000 UTC m=+1265.137740084" lastFinishedPulling="2026-02-03 10:23:19.629534031 +0000 UTC m=+1269.785510170" observedRunningTime="2026-02-03 10:23:20.56475233 +0000 UTC m=+1270.720728459" watchObservedRunningTime="2026-02-03 10:23:20.568026485 +0000 UTC m=+1270.724002614" Feb 03 10:23:21 crc kubenswrapper[5010]: I0203 10:23:21.556542 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ql6ht" event={"ID":"1883c30e-4c38-468d-a5dc-91b07f167d67","Type":"ContainerStarted","Data":"5061f0de98754bb7f6cbb3fd8c116e1df2bc405232c5873037db4f0594aacf56"} Feb 03 10:23:21 crc kubenswrapper[5010]: I0203 10:23:21.557150 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ql6ht" Feb 03 10:23:21 crc kubenswrapper[5010]: I0203 10:23:21.578715 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ql6ht" podStartSLOduration=2.407311167 podStartE2EDuration="45.578698379s" 
podCreationTimestamp="2026-02-03 10:22:36 +0000 UTC" firstStartedPulling="2026-02-03 10:22:37.855019019 +0000 UTC m=+1228.010995148" lastFinishedPulling="2026-02-03 10:23:21.026406231 +0000 UTC m=+1271.182382360" observedRunningTime="2026-02-03 10:23:21.576453601 +0000 UTC m=+1271.732429740" watchObservedRunningTime="2026-02-03 10:23:21.578698379 +0000 UTC m=+1271.734674518" Feb 03 10:23:22 crc kubenswrapper[5010]: I0203 10:23:22.565858 5010 generic.go:334] "Generic (PLEG): container finished" podID="87eb5dd8-7171-457a-8a95-eda98893319a" containerID="836d10e13031e1e589fd13f0fcda7b9cdf717cf593196a23ab06f2b0deb83c45" exitCode=0 Feb 03 10:23:22 crc kubenswrapper[5010]: I0203 10:23:22.565931 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"87eb5dd8-7171-457a-8a95-eda98893319a","Type":"ContainerDied","Data":"836d10e13031e1e589fd13f0fcda7b9cdf717cf593196a23ab06f2b0deb83c45"} Feb 03 10:23:22 crc kubenswrapper[5010]: I0203 10:23:22.568202 5010 generic.go:334] "Generic (PLEG): container finished" podID="449f0b91-9186-4a16-b1b4-7f199b57a428" containerID="d59aa2650c8950a2a4ba7a7dc97b6834e3c2613e89263135debc00e0122c70c1" exitCode=0 Feb 03 10:23:22 crc kubenswrapper[5010]: I0203 10:23:22.568469 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"449f0b91-9186-4a16-b1b4-7f199b57a428","Type":"ContainerDied","Data":"d59aa2650c8950a2a4ba7a7dc97b6834e3c2613e89263135debc00e0122c70c1"} Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.578430 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"449f0b91-9186-4a16-b1b4-7f199b57a428","Type":"ContainerStarted","Data":"33659d6b8b33f4fa83e7ee3b9ea84cc7b2b68df78d0c3e36845ed0496dbd20ef"} Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.579968 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"87eb5dd8-7171-457a-8a95-eda98893319a","Type":"ContainerStarted","Data":"a167c7205f060547e741424f98d3ed27bfb2810e1d67d3b95c95dc0aa8fcf4d7"} Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.601743 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=16.070981453 podStartE2EDuration="56.601726732s" podCreationTimestamp="2026-02-03 10:22:27 +0000 UTC" firstStartedPulling="2026-02-03 10:22:30.901557893 +0000 UTC m=+1221.057534022" lastFinishedPulling="2026-02-03 10:23:11.432303172 +0000 UTC m=+1261.588279301" observedRunningTime="2026-02-03 10:23:23.599597107 +0000 UTC m=+1273.755573276" watchObservedRunningTime="2026-02-03 10:23:23.601726732 +0000 UTC m=+1273.757702861" Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.625697 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=16.199340962 podStartE2EDuration="54.625673559s" podCreationTimestamp="2026-02-03 10:22:29 +0000 UTC" firstStartedPulling="2026-02-03 10:22:32.619633455 +0000 UTC m=+1222.775609584" lastFinishedPulling="2026-02-03 10:23:11.045966052 +0000 UTC m=+1261.201942181" observedRunningTime="2026-02-03 10:23:23.624044327 +0000 UTC m=+1273.780020486" watchObservedRunningTime="2026-02-03 10:23:23.625673559 +0000 UTC m=+1273.781649728" Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.847686 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bsmfs"] Feb 03 10:23:23 crc 
kubenswrapper[5010]: I0203 10:23:23.848026 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" podUID="794d29fd-0784-4f8c-8f62-e6753d046def" containerName="dnsmasq-dns" containerID="cri-o://a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee" gracePeriod=10 Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.849382 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.894356 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-c5kgf"] Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.896336 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.904520 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-c5kgf"] Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.050758 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5td8\" (UniqueName: \"kubernetes.io/projected/44cce4a6-14dd-4b2d-9473-49edee803476-kube-api-access-s5td8\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.050832 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-dns-svc\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.050862 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.050984 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-config\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.051066 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.118662 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.155407 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5td8\" (UniqueName: \"kubernetes.io/projected/44cce4a6-14dd-4b2d-9473-49edee803476-kube-api-access-s5td8\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: 
\"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.155474 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-dns-svc\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.155498 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.156446 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-config\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.156504 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.156636 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-dns-svc\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.156637 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.157699 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-config\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.157737 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.175669 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5td8\" (UniqueName: \"kubernetes.io/projected/44cce4a6-14dd-4b2d-9473-49edee803476-kube-api-access-s5td8\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.310715 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-c5kgf"
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.420665 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs"
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.562664 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-sb\") pod \"794d29fd-0784-4f8c-8f62-e6753d046def\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") "
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.562779 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-config\") pod \"794d29fd-0784-4f8c-8f62-e6753d046def\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") "
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.562806 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-nb\") pod \"794d29fd-0784-4f8c-8f62-e6753d046def\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") "
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.562911 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-dns-svc\") pod \"794d29fd-0784-4f8c-8f62-e6753d046def\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") "
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.562996 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf9dm\" (UniqueName: \"kubernetes.io/projected/794d29fd-0784-4f8c-8f62-e6753d046def-kube-api-access-hf9dm\") pod \"794d29fd-0784-4f8c-8f62-e6753d046def\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") "
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.569425 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/794d29fd-0784-4f8c-8f62-e6753d046def-kube-api-access-hf9dm" (OuterVolumeSpecName: "kube-api-access-hf9dm") pod "794d29fd-0784-4f8c-8f62-e6753d046def" (UID: "794d29fd-0784-4f8c-8f62-e6753d046def"). InnerVolumeSpecName "kube-api-access-hf9dm".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.590508 5010 generic.go:334] "Generic (PLEG): container finished" podID="794d29fd-0784-4f8c-8f62-e6753d046def" containerID="a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee" exitCode=0 Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.590556 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" event={"ID":"794d29fd-0784-4f8c-8f62-e6753d046def","Type":"ContainerDied","Data":"a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee"} Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.590608 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" event={"ID":"794d29fd-0784-4f8c-8f62-e6753d046def","Type":"ContainerDied","Data":"098a23dc68ddad3e911c76c8c4f89d48833726d8e974b792fb670b88ee30cae7"} Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.590629 5010 scope.go:117] "RemoveContainer" containerID="a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.590681 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.607485 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "794d29fd-0784-4f8c-8f62-e6753d046def" (UID: "794d29fd-0784-4f8c-8f62-e6753d046def"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.609768 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "794d29fd-0784-4f8c-8f62-e6753d046def" (UID: "794d29fd-0784-4f8c-8f62-e6753d046def"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.610499 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-config" (OuterVolumeSpecName: "config") pod "794d29fd-0784-4f8c-8f62-e6753d046def" (UID: "794d29fd-0784-4f8c-8f62-e6753d046def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.616615 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "794d29fd-0784-4f8c-8f62-e6753d046def" (UID: "794d29fd-0784-4f8c-8f62-e6753d046def"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.623070 5010 scope.go:117] "RemoveContainer" containerID="ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.643644 5010 scope.go:117] "RemoveContainer" containerID="a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee" Feb 03 10:23:24 crc kubenswrapper[5010]: E0203 10:23:24.645249 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee\": container with ID starting with a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee not found: ID does not exist" containerID="a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.645306 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee"} err="failed to get container status \"a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee\": rpc error: code = NotFound desc = could not find container \"a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee\": container with ID starting with a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee not found: ID does not exist" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.645335 5010 scope.go:117] "RemoveContainer" containerID="ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b" Feb 03 10:23:24 crc kubenswrapper[5010]: E0203 10:23:24.645884 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b\": container with ID starting with ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b not found: ID does not exist" containerID="ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.645907 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b"} err="failed to get container status \"ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b\": rpc error: code = NotFound desc = could not find container \"ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b\": container with ID starting with ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b not found: ID does not exist" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.664755 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf9dm\" (UniqueName: \"kubernetes.io/projected/794d29fd-0784-4f8c-8f62-e6753d046def-kube-api-access-hf9dm\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.664794 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.664810 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:24 crc kubenswrapper[5010]: 
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.664822 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.664832 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.765723 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-c5kgf"]
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.930436 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bsmfs"]
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.936521 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bsmfs"]
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.975850 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Feb 03 10:23:24 crc kubenswrapper[5010]: E0203 10:23:24.976179 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="794d29fd-0784-4f8c-8f62-e6753d046def" containerName="init"
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.976194 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="794d29fd-0784-4f8c-8f62-e6753d046def" containerName="init"
Feb 03 10:23:24 crc kubenswrapper[5010]: E0203 10:23:24.980044 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="794d29fd-0784-4f8c-8f62-e6753d046def" containerName="dnsmasq-dns"
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.980077 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="794d29fd-0784-4f8c-8f62-e6753d046def" containerName="dnsmasq-dns"
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.980438 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="794d29fd-0784-4f8c-8f62-e6753d046def" containerName="dnsmasq-dns"
Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.987477 5010 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/swift-storage-0" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.990831 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-z59t4" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.991019 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.991127 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.991273 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.994196 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.078492 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4b58c504-f707-43fe-91ca-4328c58e998c-lock\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.078541 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.078561 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.078603 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4b58c504-f707-43fe-91ca-4328c58e998c-cache\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.078634 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp84n\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-kube-api-access-wp84n\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.078662 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b58c504-f707-43fe-91ca-4328c58e998c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.180475 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4b58c504-f707-43fe-91ca-4328c58e998c-cache\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.180574 5010 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wp84n\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-kube-api-access-wp84n\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.180646 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b58c504-f707-43fe-91ca-4328c58e998c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.180763 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4b58c504-f707-43fe-91ca-4328c58e998c-lock\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.180809 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.180843 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: E0203 10:23:25.180986 5010 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 10:23:25 crc kubenswrapper[5010]: E0203 10:23:25.181015 5010 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 10:23:25 crc kubenswrapper[5010]: E0203 10:23:25.181066 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift podName:4b58c504-f707-43fe-91ca-4328c58e998c nodeName:}" failed. No retries permitted until 2026-02-03 10:23:25.681047205 +0000 UTC m=+1275.837023334 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift") pod "swift-storage-0" (UID: "4b58c504-f707-43fe-91ca-4328c58e998c") : configmap "swift-ring-files" not found Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.181352 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.181441 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4b58c504-f707-43fe-91ca-4328c58e998c-lock\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.181592 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4b58c504-f707-43fe-91ca-4328c58e998c-cache\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.204517 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b58c504-f707-43fe-91ca-4328c58e998c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.207001 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp84n\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-kube-api-access-wp84n\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.238399 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.599669 5010 generic.go:334] "Generic (PLEG): container finished" podID="44cce4a6-14dd-4b2d-9473-49edee803476" containerID="3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf" exitCode=0 Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.599749 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-c5kgf" event={"ID":"44cce4a6-14dd-4b2d-9473-49edee803476","Type":"ContainerDied","Data":"3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf"} Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.599778 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-c5kgf" event={"ID":"44cce4a6-14dd-4b2d-9473-49edee803476","Type":"ContainerStarted","Data":"7b4cc9746175c611db5edf3a8b25a3610c6d4de7b21e5812358190938f2ecfc7"} Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.690359 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: 
\"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: E0203 10:23:25.690912 5010 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 10:23:25 crc kubenswrapper[5010]: E0203 10:23:25.691051 5010 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 10:23:25 crc kubenswrapper[5010]: E0203 10:23:25.691269 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift podName:4b58c504-f707-43fe-91ca-4328c58e998c nodeName:}" failed. No retries permitted until 2026-02-03 10:23:26.691241718 +0000 UTC m=+1276.847217847 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift") pod "swift-storage-0" (UID: "4b58c504-f707-43fe-91ca-4328c58e998c") : configmap "swift-ring-files" not found Feb 03 10:23:26 crc kubenswrapper[5010]: I0203 10:23:26.511946 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="794d29fd-0784-4f8c-8f62-e6753d046def" path="/var/lib/kubelet/pods/794d29fd-0784-4f8c-8f62-e6753d046def/volumes" Feb 03 10:23:26 crc kubenswrapper[5010]: I0203 10:23:26.613287 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-c5kgf" event={"ID":"44cce4a6-14dd-4b2d-9473-49edee803476","Type":"ContainerStarted","Data":"f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98"} Feb 03 10:23:26 crc kubenswrapper[5010]: I0203 10:23:26.613452 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:26 crc kubenswrapper[5010]: I0203 10:23:26.636900 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-c5kgf" podStartSLOduration=3.636879476 podStartE2EDuration="3.636879476s" podCreationTimestamp="2026-02-03 10:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:23:26.628389007 +0000 UTC m=+1276.784365156" watchObservedRunningTime="2026-02-03 10:23:26.636879476 +0000 UTC m=+1276.792855605" Feb 03 10:23:26 crc kubenswrapper[5010]: I0203 10:23:26.707581 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:26 crc kubenswrapper[5010]: E0203 10:23:26.707749 5010 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 10:23:26 crc kubenswrapper[5010]: E0203 10:23:26.707782 5010 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 10:23:26 crc kubenswrapper[5010]: E0203 10:23:26.707847 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift podName:4b58c504-f707-43fe-91ca-4328c58e998c nodeName:}" failed. No retries permitted until 2026-02-03 10:23:28.707825725 +0000 UTC m=+1278.863801914 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift") pod "swift-storage-0" (UID: "4b58c504-f707-43fe-91ca-4328c58e998c") : configmap "swift-ring-files" not found
Feb 03 10:23:28 crc kubenswrapper[5010]: I0203 10:23:28.740301 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0"
Feb 03 10:23:28 crc kubenswrapper[5010]: E0203 10:23:28.740483 5010 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 03 10:23:28 crc kubenswrapper[5010]: E0203 10:23:28.740708 5010 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 03 10:23:28 crc kubenswrapper[5010]: E0203 10:23:28.740759 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift podName:4b58c504-f707-43fe-91ca-4328c58e998c nodeName:}" failed. No retries permitted until 2026-02-03 10:23:32.740742482 +0000 UTC m=+1282.896718611 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift") pod "swift-storage-0" (UID: "4b58c504-f707-43fe-91ca-4328c58e998c") : configmap "swift-ring-files" not found
Feb 03 10:23:28 crc kubenswrapper[5010]: I0203 10:23:28.934927 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-n8qtn"]
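The durationBeforeRetry for the etc-swift volume doubles per failed attempt: 500ms at 10:23:25, then 1s, 2s, 4s above, and 8s at 10:23:32. The projected volume references the configmap "swift-ring-files", which the swift-ring-rebalance-n8qtn job scheduled in the last entry has not yet created. A sketch of that exponential backoff (base and cap are assumptions; the kubelet's actual logic lives in nestedpendingoperations.go):

```go
// Minimal sketch of the doubling retry delay seen above: 500ms, 1s,
// 2s, 4s, 8s. The 500ms base matches the first retry in the log; the
// cap is an assumption for illustration.
package main

import (
	"fmt"
	"time"
)

func backoffDelay(attempt int) time.Duration {
	d := 500 * time.Millisecond
	for i := 0; i < attempt; i++ {
		d *= 2
	}
	if limit := 2 * time.Minute; d > limit { // cap value is assumed
		d = limit
	}
	return d
}

func main() {
	for attempt := 0; attempt < 5; attempt++ {
		fmt.Println(backoffDelay(attempt)) // 500ms 1s 2s 4s 8s
	}
}
```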
Need to start a new one" pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:28 crc kubenswrapper[5010]: I0203 10:23:28.938097 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 03 10:23:28 crc kubenswrapper[5010]: I0203 10:23:28.938282 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 03 10:23:28 crc kubenswrapper[5010]: I0203 10:23:28.938492 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 03 10:23:28 crc kubenswrapper[5010]: I0203 10:23:28.946410 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-n8qtn"] Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.045911 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-combined-ca-bundle\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.045964 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-swiftconf\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.046035 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-dispersionconf\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.046068 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-etc-swift\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.046085 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-ring-data-devices\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.046312 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-scripts\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.046394 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7n9j\" (UniqueName: \"kubernetes.io/projected/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-kube-api-access-c7n9j\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 
10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.147953 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-dispersionconf\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.148017 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-etc-swift\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.148044 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-ring-data-devices\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.148106 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-scripts\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.148145 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7n9j\" (UniqueName: \"kubernetes.io/projected/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-kube-api-access-c7n9j\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.148184 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-combined-ca-bundle\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.148234 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-swiftconf\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.149108 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-etc-swift\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.149448 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-scripts\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.149856 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-ring-data-devices\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.153586 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-combined-ca-bundle\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.153734 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-swiftconf\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.160066 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-dispersionconf\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.166628 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7n9j\" (UniqueName: \"kubernetes.io/projected/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-kube-api-access-c7n9j\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.291126 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.562560 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.562852 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.656790 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.735018 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.752444 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-n8qtn"] Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.642673 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8qtn" event={"ID":"65c9ffaf-83e3-47c1-a1e8-b097b371ccec","Type":"ContainerStarted","Data":"05528d7b25b91ddd2d6931ebb207234211817db001ec48df5c320eaf05808c38"} Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.850985 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-caa6-account-create-update-69sjp"] Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.853782 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.858488 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.875498 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-caa6-account-create-update-69sjp"] Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.960055 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-nh655"] Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.961412 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nh655" Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.972458 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nh655"] Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.981068 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-operator-scripts\") pod \"keystone-caa6-account-create-update-69sjp\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.981415 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bdc\" (UniqueName: \"kubernetes.io/projected/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-kube-api-access-v4bdc\") pod \"keystone-caa6-account-create-update-69sjp\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.065818 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9qjk8"] Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.067022 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.076288 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9qjk8"] Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.082916 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4bdc\" (UniqueName: \"kubernetes.io/projected/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-kube-api-access-v4bdc\") pod \"keystone-caa6-account-create-update-69sjp\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.083038 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cf6f6f7-d993-486c-9dcf-63d6b298f898-operator-scripts\") pod \"keystone-db-create-nh655\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " pod="openstack/keystone-db-create-nh655" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.083101 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-operator-scripts\") pod \"keystone-caa6-account-create-update-69sjp\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.083199 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z462c\" (UniqueName: \"kubernetes.io/projected/7cf6f6f7-d993-486c-9dcf-63d6b298f898-kube-api-access-z462c\") pod \"keystone-db-create-nh655\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " pod="openstack/keystone-db-create-nh655" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.087059 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-operator-scripts\") pod \"keystone-caa6-account-create-update-69sjp\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.119063 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4bdc\" (UniqueName: \"kubernetes.io/projected/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-kube-api-access-v4bdc\") pod \"keystone-caa6-account-create-update-69sjp\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.183232 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3037-account-create-update-847d2"] Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.184267 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.184591 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.186117 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.187635 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-operator-scripts\") pod \"placement-db-create-9qjk8\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.187725 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z462c\" (UniqueName: \"kubernetes.io/projected/7cf6f6f7-d993-486c-9dcf-63d6b298f898-kube-api-access-z462c\") pod \"keystone-db-create-nh655\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " pod="openstack/keystone-db-create-nh655" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.187811 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cf6f6f7-d993-486c-9dcf-63d6b298f898-operator-scripts\") pod \"keystone-db-create-nh655\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " pod="openstack/keystone-db-create-nh655" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.187842 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxnpk\" (UniqueName: \"kubernetes.io/projected/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-kube-api-access-cxnpk\") pod \"placement-db-create-9qjk8\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.188708 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cf6f6f7-d993-486c-9dcf-63d6b298f898-operator-scripts\") pod \"keystone-db-create-nh655\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " pod="openstack/keystone-db-create-nh655" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.197822 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3037-account-create-update-847d2"] Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.200629 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.200715 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.206817 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z462c\" (UniqueName: \"kubernetes.io/projected/7cf6f6f7-d993-486c-9dcf-63d6b298f898-kube-api-access-z462c\") pod \"keystone-db-create-nh655\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " pod="openstack/keystone-db-create-nh655" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.286152 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nh655" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.289682 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxnpk\" (UniqueName: \"kubernetes.io/projected/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-kube-api-access-cxnpk\") pod \"placement-db-create-9qjk8\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.290597 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-operator-scripts\") pod \"placement-db-create-9qjk8\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.290672 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn6zx\" (UniqueName: \"kubernetes.io/projected/9e03bfed-c1c6-4165-86c0-6c1415a30081-kube-api-access-zn6zx\") pod \"placement-3037-account-create-update-847d2\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.290737 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e03bfed-c1c6-4165-86c0-6c1415a30081-operator-scripts\") pod \"placement-3037-account-create-update-847d2\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.291154 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-operator-scripts\") pod \"placement-db-create-9qjk8\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.307071 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxnpk\" (UniqueName: \"kubernetes.io/projected/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-kube-api-access-cxnpk\") pod \"placement-db-create-9qjk8\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.321091 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.392229 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn6zx\" (UniqueName: \"kubernetes.io/projected/9e03bfed-c1c6-4165-86c0-6c1415a30081-kube-api-access-zn6zx\") pod \"placement-3037-account-create-update-847d2\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.392287 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e03bfed-c1c6-4165-86c0-6c1415a30081-operator-scripts\") pod \"placement-3037-account-create-update-847d2\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 
crc kubenswrapper[5010]: I0203 10:23:31.393086 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e03bfed-c1c6-4165-86c0-6c1415a30081-operator-scripts\") pod \"placement-3037-account-create-update-847d2\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.394631 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.408266 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn6zx\" (UniqueName: \"kubernetes.io/projected/9e03bfed-c1c6-4165-86c0-6c1415a30081-kube-api-access-zn6zx\") pod \"placement-3037-account-create-update-847d2\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.688338 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.826680 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 03 10:23:32 crc kubenswrapper[5010]: I0203 10:23:32.170434 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nh655"] Feb 03 10:23:32 crc kubenswrapper[5010]: I0203 10:23:32.177937 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9qjk8"] Feb 03 10:23:32 crc kubenswrapper[5010]: I0203 10:23:32.345281 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-caa6-account-create-update-69sjp"] Feb 03 10:23:32 crc kubenswrapper[5010]: I0203 10:23:32.821474 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:32 crc kubenswrapper[5010]: E0203 10:23:32.821708 5010 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 10:23:32 crc kubenswrapper[5010]: E0203 10:23:32.821737 5010 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 10:23:32 crc kubenswrapper[5010]: E0203 10:23:32.821798 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift podName:4b58c504-f707-43fe-91ca-4328c58e998c nodeName:}" failed. No retries permitted until 2026-02-03 10:23:40.821780778 +0000 UTC m=+1290.977756907 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift") pod "swift-storage-0" (UID: "4b58c504-f707-43fe-91ca-4328c58e998c") : configmap "swift-ring-files" not found Feb 03 10:23:33 crc kubenswrapper[5010]: W0203 10:23:33.917685 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cf6f6f7_d993_486c_9dcf_63d6b298f898.slice/crio-c42abc7a8375b4b278fa745e2a4991ab20a2e2a586627dc3875627dcc3f98e03 WatchSource:0}: Error finding container c42abc7a8375b4b278fa745e2a4991ab20a2e2a586627dc3875627dcc3f98e03: Status 404 returned error can't find the container with id c42abc7a8375b4b278fa745e2a4991ab20a2e2a586627dc3875627dcc3f98e03 Feb 03 10:23:33 crc kubenswrapper[5010]: W0203 10:23:33.919086 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a6faff8_cfd9_4253_8dc3_d3df2b3252be.slice/crio-d7f00bf0640736d45f71cd118c2254b0787ce4238feabdee74b5da5d9ba600e0 WatchSource:0}: Error finding container d7f00bf0640736d45f71cd118c2254b0787ce4238feabdee74b5da5d9ba600e0: Status 404 returned error can't find the container with id d7f00bf0640736d45f71cd118c2254b0787ce4238feabdee74b5da5d9ba600e0 Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.266622 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.312580 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.326635 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3037-account-create-update-847d2"] Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.394511 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-84hts"] Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.394748 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" podUID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerName="dnsmasq-dns" containerID="cri-o://2a39e93057d80e1a2e85ebc3a8a730552d12cf63e0e15cf7d8339a09d27bdab7" gracePeriod=10 Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.743610 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nh655" event={"ID":"7cf6f6f7-d993-486c-9dcf-63d6b298f898","Type":"ContainerStarted","Data":"867e48e65d90b62aadc6ddb63e004c04adf8450508e9b1413072265967186694"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.743661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nh655" event={"ID":"7cf6f6f7-d993-486c-9dcf-63d6b298f898","Type":"ContainerStarted","Data":"c42abc7a8375b4b278fa745e2a4991ab20a2e2a586627dc3875627dcc3f98e03"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.748238 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8qtn" event={"ID":"65c9ffaf-83e3-47c1-a1e8-b097b371ccec","Type":"ContainerStarted","Data":"d6d0dcfaf8344c8474b2f870e0a3c246fba9c7b000a18b30741b2b813b8e10cd"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.750514 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-caa6-account-create-update-69sjp" 
event={"ID":"9a6faff8-cfd9-4253-8dc3-d3df2b3252be","Type":"ContainerStarted","Data":"e98e811059a9c2d02f4a30baf36100191798d1770e183f8268ccff78ece3d154"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.750562 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-caa6-account-create-update-69sjp" event={"ID":"9a6faff8-cfd9-4253-8dc3-d3df2b3252be","Type":"ContainerStarted","Data":"d7f00bf0640736d45f71cd118c2254b0787ce4238feabdee74b5da5d9ba600e0"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.753789 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3037-account-create-update-847d2" event={"ID":"9e03bfed-c1c6-4165-86c0-6c1415a30081","Type":"ContainerStarted","Data":"ecc37d219487243243570207ff635b3c963683b6d23c8e89c6a83dba41ce9ef2"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.753841 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3037-account-create-update-847d2" event={"ID":"9e03bfed-c1c6-4165-86c0-6c1415a30081","Type":"ContainerStarted","Data":"11adf87cd6aac6252b89e1d6e8378ed7df21239a139fae101d60fe883f287571"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.756065 5010 generic.go:334] "Generic (PLEG): container finished" podID="b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996" containerID="783df9142821b00a27f64292c3e26d0dec1e72fe32175024883cc3eb71e60b8b" exitCode=0 Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.756138 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9qjk8" event={"ID":"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996","Type":"ContainerDied","Data":"783df9142821b00a27f64292c3e26d0dec1e72fe32175024883cc3eb71e60b8b"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.756167 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9qjk8" event={"ID":"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996","Type":"ContainerStarted","Data":"b2840ce53bec4b0c6c02ff134b8c0fd5257ca07c7ed9500554ece5a4a25bfa04"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.761639 5010 generic.go:334] "Generic (PLEG): container finished" podID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerID="2a39e93057d80e1a2e85ebc3a8a730552d12cf63e0e15cf7d8339a09d27bdab7" exitCode=0 Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.761703 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" event={"ID":"3ea6e430-f9a6-4850-b58e-24ac04fd49a2","Type":"ContainerDied","Data":"2a39e93057d80e1a2e85ebc3a8a730552d12cf63e0e15cf7d8339a09d27bdab7"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.779504 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-3037-account-create-update-847d2" podStartSLOduration=3.779486167 podStartE2EDuration="3.779486167s" podCreationTimestamp="2026-02-03 10:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:23:34.775858573 +0000 UTC m=+1284.931834712" watchObservedRunningTime="2026-02-03 10:23:34.779486167 +0000 UTC m=+1284.935462296" Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.804413 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-n8qtn" podStartSLOduration=2.480469292 podStartE2EDuration="6.804394629s" podCreationTimestamp="2026-02-03 10:23:28 +0000 UTC" firstStartedPulling="2026-02-03 10:23:29.762477952 +0000 UTC m=+1279.918454081" 
lastFinishedPulling="2026-02-03 10:23:34.086403289 +0000 UTC m=+1284.242379418" observedRunningTime="2026-02-03 10:23:34.798162518 +0000 UTC m=+1284.954138657" watchObservedRunningTime="2026-02-03 10:23:34.804394629 +0000 UTC m=+1284.960370758" Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.823758 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-caa6-account-create-update-69sjp" podStartSLOduration=4.823725787 podStartE2EDuration="4.823725787s" podCreationTimestamp="2026-02-03 10:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:23:34.821762617 +0000 UTC m=+1284.977738756" watchObservedRunningTime="2026-02-03 10:23:34.823725787 +0000 UTC m=+1284.979701916" Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.893303 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.989965 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59czz\" (UniqueName: \"kubernetes.io/projected/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-kube-api-access-59czz\") pod \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.990180 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-ovsdbserver-nb\") pod \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.990244 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-config\") pod \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.990281 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-dns-svc\") pod \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.995563 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-kube-api-access-59czz" (OuterVolumeSpecName: "kube-api-access-59czz") pod "3ea6e430-f9a6-4850-b58e-24ac04fd49a2" (UID: "3ea6e430-f9a6-4850-b58e-24ac04fd49a2"). InnerVolumeSpecName "kube-api-access-59czz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.035128 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-config" (OuterVolumeSpecName: "config") pod "3ea6e430-f9a6-4850-b58e-24ac04fd49a2" (UID: "3ea6e430-f9a6-4850-b58e-24ac04fd49a2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.035978 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ea6e430-f9a6-4850-b58e-24ac04fd49a2" (UID: "3ea6e430-f9a6-4850-b58e-24ac04fd49a2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.050501 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ea6e430-f9a6-4850-b58e-24ac04fd49a2" (UID: "3ea6e430-f9a6-4850-b58e-24ac04fd49a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.092723 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.092762 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.092774 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.092786 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59czz\" (UniqueName: \"kubernetes.io/projected/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-kube-api-access-59czz\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.782634 5010 generic.go:334] "Generic (PLEG): container finished" podID="9e03bfed-c1c6-4165-86c0-6c1415a30081" containerID="ecc37d219487243243570207ff635b3c963683b6d23c8e89c6a83dba41ce9ef2" exitCode=0 Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.782731 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3037-account-create-update-847d2" event={"ID":"9e03bfed-c1c6-4165-86c0-6c1415a30081","Type":"ContainerDied","Data":"ecc37d219487243243570207ff635b3c963683b6d23c8e89c6a83dba41ce9ef2"} Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.785916 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.785908 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" event={"ID":"3ea6e430-f9a6-4850-b58e-24ac04fd49a2","Type":"ContainerDied","Data":"b237a98e3b61244f5b8cbba9933237b1c87653782e7c801f5d548e23ebd2e6d6"} Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.786069 5010 scope.go:117] "RemoveContainer" containerID="2a39e93057d80e1a2e85ebc3a8a730552d12cf63e0e15cf7d8339a09d27bdab7" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.787801 5010 generic.go:334] "Generic (PLEG): container finished" podID="7cf6f6f7-d993-486c-9dcf-63d6b298f898" containerID="867e48e65d90b62aadc6ddb63e004c04adf8450508e9b1413072265967186694" exitCode=0 Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.787869 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nh655" event={"ID":"7cf6f6f7-d993-486c-9dcf-63d6b298f898","Type":"ContainerDied","Data":"867e48e65d90b62aadc6ddb63e004c04adf8450508e9b1413072265967186694"} Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.790801 5010 generic.go:334] "Generic (PLEG): container finished" podID="9a6faff8-cfd9-4253-8dc3-d3df2b3252be" containerID="e98e811059a9c2d02f4a30baf36100191798d1770e183f8268ccff78ece3d154" exitCode=0 Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.791011 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-caa6-account-create-update-69sjp" event={"ID":"9a6faff8-cfd9-4253-8dc3-d3df2b3252be","Type":"ContainerDied","Data":"e98e811059a9c2d02f4a30baf36100191798d1770e183f8268ccff78ece3d154"} Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.815714 5010 scope.go:117] "RemoveContainer" containerID="dcafe9c15b252f4afce63db43717e61b273dee3af36eabf6852fd51f8f27c930" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.858708 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-84hts"] Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.867473 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-84hts"] Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.197356 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nh655" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.286557 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.326738 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z462c\" (UniqueName: \"kubernetes.io/projected/7cf6f6f7-d993-486c-9dcf-63d6b298f898-kube-api-access-z462c\") pod \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.326976 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cf6f6f7-d993-486c-9dcf-63d6b298f898-operator-scripts\") pod \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.327758 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cf6f6f7-d993-486c-9dcf-63d6b298f898-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7cf6f6f7-d993-486c-9dcf-63d6b298f898" (UID: "7cf6f6f7-d993-486c-9dcf-63d6b298f898"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.332151 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf6f6f7-d993-486c-9dcf-63d6b298f898-kube-api-access-z462c" (OuterVolumeSpecName: "kube-api-access-z462c") pod "7cf6f6f7-d993-486c-9dcf-63d6b298f898" (UID: "7cf6f6f7-d993-486c-9dcf-63d6b298f898"). InnerVolumeSpecName "kube-api-access-z462c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.428116 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-operator-scripts\") pod \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.428256 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxnpk\" (UniqueName: \"kubernetes.io/projected/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-kube-api-access-cxnpk\") pod \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.428595 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996" (UID: "b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.428722 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cf6f6f7-d993-486c-9dcf-63d6b298f898-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.428741 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z462c\" (UniqueName: \"kubernetes.io/projected/7cf6f6f7-d993-486c-9dcf-63d6b298f898-kube-api-access-z462c\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.428751 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.431058 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-kube-api-access-cxnpk" (OuterVolumeSpecName: "kube-api-access-cxnpk") pod "b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996" (UID: "b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996"). InnerVolumeSpecName "kube-api-access-cxnpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.510128 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" path="/var/lib/kubelet/pods/3ea6e430-f9a6-4850-b58e-24ac04fd49a2/volumes" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.529826 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxnpk\" (UniqueName: \"kubernetes.io/projected/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-kube-api-access-cxnpk\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.620401 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-g8ncl"] Feb 03 10:23:36 crc kubenswrapper[5010]: E0203 10:23:36.620998 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerName="dnsmasq-dns" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.621034 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerName="dnsmasq-dns" Feb 03 10:23:36 crc kubenswrapper[5010]: E0203 10:23:36.621059 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf6f6f7-d993-486c-9dcf-63d6b298f898" containerName="mariadb-database-create" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.621071 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf6f6f7-d993-486c-9dcf-63d6b298f898" containerName="mariadb-database-create" Feb 03 10:23:36 crc kubenswrapper[5010]: E0203 10:23:36.621122 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerName="init" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.621134 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerName="init" Feb 03 10:23:36 crc kubenswrapper[5010]: E0203 10:23:36.621164 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996" containerName="mariadb-database-create" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.621175 5010 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996" containerName="mariadb-database-create" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.621461 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerName="dnsmasq-dns" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.621507 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf6f6f7-d993-486c-9dcf-63d6b298f898" containerName="mariadb-database-create" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.621528 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996" containerName="mariadb-database-create" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.622357 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.631326 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-g8ncl"] Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.706841 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-06a9-account-create-update-764vb"] Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.708529 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.710444 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.715411 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-06a9-account-create-update-764vb"] Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.734959 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jljbn\" (UniqueName: \"kubernetes.io/projected/0505d3aa-dab1-4f61-af12-69804ff1345a-kube-api-access-jljbn\") pod \"glance-db-create-g8ncl\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") " pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.735058 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0505d3aa-dab1-4f61-af12-69804ff1345a-operator-scripts\") pod \"glance-db-create-g8ncl\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") " pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.799478 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nh655" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.799469 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nh655" event={"ID":"7cf6f6f7-d993-486c-9dcf-63d6b298f898","Type":"ContainerDied","Data":"c42abc7a8375b4b278fa745e2a4991ab20a2e2a586627dc3875627dcc3f98e03"} Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.799619 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c42abc7a8375b4b278fa745e2a4991ab20a2e2a586627dc3875627dcc3f98e03" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.800724 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9qjk8" event={"ID":"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996","Type":"ContainerDied","Data":"b2840ce53bec4b0c6c02ff134b8c0fd5257ca07c7ed9500554ece5a4a25bfa04"} Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.800757 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2840ce53bec4b0c6c02ff134b8c0fd5257ca07c7ed9500554ece5a4a25bfa04" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.800803 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.837989 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jljbn\" (UniqueName: \"kubernetes.io/projected/0505d3aa-dab1-4f61-af12-69804ff1345a-kube-api-access-jljbn\") pod \"glance-db-create-g8ncl\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") " pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.838124 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0505d3aa-dab1-4f61-af12-69804ff1345a-operator-scripts\") pod \"glance-db-create-g8ncl\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") " pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.838323 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2d0be64-0307-43ee-9c2c-905f1d22c267-operator-scripts\") pod \"glance-06a9-account-create-update-764vb\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") " pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.838398 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pfxf\" (UniqueName: \"kubernetes.io/projected/e2d0be64-0307-43ee-9c2c-905f1d22c267-kube-api-access-8pfxf\") pod \"glance-06a9-account-create-update-764vb\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") " pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.839795 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0505d3aa-dab1-4f61-af12-69804ff1345a-operator-scripts\") pod \"glance-db-create-g8ncl\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") " pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.858197 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jljbn\" (UniqueName: 
\"kubernetes.io/projected/0505d3aa-dab1-4f61-af12-69804ff1345a-kube-api-access-jljbn\") pod \"glance-db-create-g8ncl\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") " pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.938342 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.939810 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2d0be64-0307-43ee-9c2c-905f1d22c267-operator-scripts\") pod \"glance-06a9-account-create-update-764vb\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") " pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.939860 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pfxf\" (UniqueName: \"kubernetes.io/projected/e2d0be64-0307-43ee-9c2c-905f1d22c267-kube-api-access-8pfxf\") pod \"glance-06a9-account-create-update-764vb\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") " pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.940619 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2d0be64-0307-43ee-9c2c-905f1d22c267-operator-scripts\") pod \"glance-06a9-account-create-update-764vb\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") " pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.959485 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pfxf\" (UniqueName: \"kubernetes.io/projected/e2d0be64-0307-43ee-9c2c-905f1d22c267-kube-api-access-8pfxf\") pod \"glance-06a9-account-create-update-764vb\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") " pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.033801 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.311704 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.319703 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.346731 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4bdc\" (UniqueName: \"kubernetes.io/projected/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-kube-api-access-v4bdc\") pod \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.346863 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e03bfed-c1c6-4165-86c0-6c1415a30081-operator-scripts\") pod \"9e03bfed-c1c6-4165-86c0-6c1415a30081\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.346930 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-operator-scripts\") pod \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.346986 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn6zx\" (UniqueName: \"kubernetes.io/projected/9e03bfed-c1c6-4165-86c0-6c1415a30081-kube-api-access-zn6zx\") pod \"9e03bfed-c1c6-4165-86c0-6c1415a30081\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.347878 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a6faff8-cfd9-4253-8dc3-d3df2b3252be" (UID: "9a6faff8-cfd9-4253-8dc3-d3df2b3252be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.348384 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e03bfed-c1c6-4165-86c0-6c1415a30081-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e03bfed-c1c6-4165-86c0-6c1415a30081" (UID: "9e03bfed-c1c6-4165-86c0-6c1415a30081"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.353053 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e03bfed-c1c6-4165-86c0-6c1415a30081-kube-api-access-zn6zx" (OuterVolumeSpecName: "kube-api-access-zn6zx") pod "9e03bfed-c1c6-4165-86c0-6c1415a30081" (UID: "9e03bfed-c1c6-4165-86c0-6c1415a30081"). InnerVolumeSpecName "kube-api-access-zn6zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.354047 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-kube-api-access-v4bdc" (OuterVolumeSpecName: "kube-api-access-v4bdc") pod "9a6faff8-cfd9-4253-8dc3-d3df2b3252be" (UID: "9a6faff8-cfd9-4253-8dc3-d3df2b3252be"). InnerVolumeSpecName "kube-api-access-v4bdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.448528 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4bdc\" (UniqueName: \"kubernetes.io/projected/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-kube-api-access-v4bdc\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.448567 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e03bfed-c1c6-4165-86c0-6c1415a30081-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.448579 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.448589 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn6zx\" (UniqueName: \"kubernetes.io/projected/9e03bfed-c1c6-4165-86c0-6c1415a30081-kube-api-access-zn6zx\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.532552 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-g8ncl"] Feb 03 10:23:37 crc kubenswrapper[5010]: W0203 10:23:37.538965 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0505d3aa_dab1_4f61_af12_69804ff1345a.slice/crio-25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f WatchSource:0}: Error finding container 25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f: Status 404 returned error can't find the container with id 25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.652570 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-06a9-account-create-update-764vb"] Feb 03 10:23:37 crc kubenswrapper[5010]: W0203 10:23:37.654939 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2d0be64_0307_43ee_9c2c_905f1d22c267.slice/crio-0fb74049b6a21c7420ab7d6a07b8da1c6271bb44e5f1f8d80798d530bdb69a26 WatchSource:0}: Error finding container 0fb74049b6a21c7420ab7d6a07b8da1c6271bb44e5f1f8d80798d530bdb69a26: Status 404 returned error can't find the container with id 0fb74049b6a21c7420ab7d6a07b8da1c6271bb44e5f1f8d80798d530bdb69a26 Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.763299 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-nbvmd"] Feb 03 10:23:37 crc kubenswrapper[5010]: E0203 10:23:37.763635 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e03bfed-c1c6-4165-86c0-6c1415a30081" containerName="mariadb-account-create-update" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.763653 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e03bfed-c1c6-4165-86c0-6c1415a30081" containerName="mariadb-account-create-update" Feb 03 10:23:37 crc kubenswrapper[5010]: E0203 10:23:37.763674 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6faff8-cfd9-4253-8dc3-d3df2b3252be" containerName="mariadb-account-create-update" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.763683 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6faff8-cfd9-4253-8dc3-d3df2b3252be" 
containerName="mariadb-account-create-update" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.763866 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e03bfed-c1c6-4165-86c0-6c1415a30081" containerName="mariadb-account-create-update" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.763883 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6faff8-cfd9-4253-8dc3-d3df2b3252be" containerName="mariadb-account-create-update" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.764464 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nbvmd" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.766369 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.775869 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nbvmd"] Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.809120 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3037-account-create-update-847d2" event={"ID":"9e03bfed-c1c6-4165-86c0-6c1415a30081","Type":"ContainerDied","Data":"11adf87cd6aac6252b89e1d6e8378ed7df21239a139fae101d60fe883f287571"} Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.809171 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11adf87cd6aac6252b89e1d6e8378ed7df21239a139fae101d60fe883f287571" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.809175 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.810882 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g8ncl" event={"ID":"0505d3aa-dab1-4f61-af12-69804ff1345a","Type":"ContainerStarted","Data":"5e4e86c382f25cd8e9bad9e5d4a055df36fab11bdb33c4c29ebe01bd4ab0d270"} Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.810946 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g8ncl" event={"ID":"0505d3aa-dab1-4f61-af12-69804ff1345a","Type":"ContainerStarted","Data":"25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f"} Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.812053 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-06a9-account-create-update-764vb" event={"ID":"e2d0be64-0307-43ee-9c2c-905f1d22c267","Type":"ContainerStarted","Data":"0fb74049b6a21c7420ab7d6a07b8da1c6271bb44e5f1f8d80798d530bdb69a26"} Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.814649 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-caa6-account-create-update-69sjp" event={"ID":"9a6faff8-cfd9-4253-8dc3-d3df2b3252be","Type":"ContainerDied","Data":"d7f00bf0640736d45f71cd118c2254b0787ce4238feabdee74b5da5d9ba600e0"} Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.814697 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7f00bf0640736d45f71cd118c2254b0787ce4238feabdee74b5da5d9ba600e0" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.814732 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.838551 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-g8ncl" podStartSLOduration=1.8385316569999999 podStartE2EDuration="1.838531657s" podCreationTimestamp="2026-02-03 10:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:23:37.828111138 +0000 UTC m=+1287.984087267" watchObservedRunningTime="2026-02-03 10:23:37.838531657 +0000 UTC m=+1287.994507786" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.857582 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbx7k\" (UniqueName: \"kubernetes.io/projected/55e89174-6261-4cf0-9d5a-a750c362b79a-kube-api-access-sbx7k\") pod \"root-account-create-update-nbvmd\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") " pod="openstack/root-account-create-update-nbvmd" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.857696 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55e89174-6261-4cf0-9d5a-a750c362b79a-operator-scripts\") pod \"root-account-create-update-nbvmd\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") " pod="openstack/root-account-create-update-nbvmd" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.959165 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbx7k\" (UniqueName: \"kubernetes.io/projected/55e89174-6261-4cf0-9d5a-a750c362b79a-kube-api-access-sbx7k\") pod \"root-account-create-update-nbvmd\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") " pod="openstack/root-account-create-update-nbvmd" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.959259 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55e89174-6261-4cf0-9d5a-a750c362b79a-operator-scripts\") pod \"root-account-create-update-nbvmd\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") " pod="openstack/root-account-create-update-nbvmd" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.962903 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55e89174-6261-4cf0-9d5a-a750c362b79a-operator-scripts\") pod \"root-account-create-update-nbvmd\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") " pod="openstack/root-account-create-update-nbvmd" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.976164 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbx7k\" (UniqueName: \"kubernetes.io/projected/55e89174-6261-4cf0-9d5a-a750c362b79a-kube-api-access-sbx7k\") pod \"root-account-create-update-nbvmd\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") " pod="openstack/root-account-create-update-nbvmd" Feb 03 10:23:38 crc kubenswrapper[5010]: E0203 10:23:38.003361 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a6faff8_cfd9_4253_8dc3_d3df2b3252be.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0505d3aa_dab1_4f61_af12_69804ff1345a.slice/crio-5e4e86c382f25cd8e9bad9e5d4a055df36fab11bdb33c4c29ebe01bd4ab0d270.scope\": RecentStats: unable to find data in memory cache]" Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.093094 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nbvmd" Feb 03 10:23:38 crc kubenswrapper[5010]: W0203 10:23:38.511271 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55e89174_6261_4cf0_9d5a_a750c362b79a.slice/crio-359eb915fcc11a138878bf839336ac69436afb76a57b48e722008cd5e4965ce1 WatchSource:0}: Error finding container 359eb915fcc11a138878bf839336ac69436afb76a57b48e722008cd5e4965ce1: Status 404 returned error can't find the container with id 359eb915fcc11a138878bf839336ac69436afb76a57b48e722008cd5e4965ce1 Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.518144 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nbvmd"] Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.826750 5010 generic.go:334] "Generic (PLEG): container finished" podID="e2d0be64-0307-43ee-9c2c-905f1d22c267" containerID="7faf76a4eb10f7d724f9bd83b1eb96f06a13d0bd092d0ededd050f56a18268b5" exitCode=0 Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.826824 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-06a9-account-create-update-764vb" event={"ID":"e2d0be64-0307-43ee-9c2c-905f1d22c267","Type":"ContainerDied","Data":"7faf76a4eb10f7d724f9bd83b1eb96f06a13d0bd092d0ededd050f56a18268b5"} Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.828692 5010 generic.go:334] "Generic (PLEG): container finished" podID="0505d3aa-dab1-4f61-af12-69804ff1345a" containerID="5e4e86c382f25cd8e9bad9e5d4a055df36fab11bdb33c4c29ebe01bd4ab0d270" exitCode=0 Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.828754 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g8ncl" event={"ID":"0505d3aa-dab1-4f61-af12-69804ff1345a","Type":"ContainerDied","Data":"5e4e86c382f25cd8e9bad9e5d4a055df36fab11bdb33c4c29ebe01bd4ab0d270"} Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.830095 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nbvmd" event={"ID":"55e89174-6261-4cf0-9d5a-a750c362b79a","Type":"ContainerStarted","Data":"387dd9fd0160568ebec8f1a6d5d1c5088020bf051ddedc665506a7243fc7b05d"} Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.830125 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nbvmd" event={"ID":"55e89174-6261-4cf0-9d5a-a750c362b79a","Type":"ContainerStarted","Data":"359eb915fcc11a138878bf839336ac69436afb76a57b48e722008cd5e4965ce1"} Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.873547 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-nbvmd" podStartSLOduration=1.873531418 podStartE2EDuration="1.873531418s" podCreationTimestamp="2026-02-03 10:23:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:23:38.871937937 +0000 UTC m=+1289.027914076" watchObservedRunningTime="2026-02-03 10:23:38.873531418 +0000 UTC m=+1289.029507547" Feb 03 10:23:39 crc kubenswrapper[5010]: I0203 10:23:39.838288 
5010 generic.go:334] "Generic (PLEG): container finished" podID="55e89174-6261-4cf0-9d5a-a750c362b79a" containerID="387dd9fd0160568ebec8f1a6d5d1c5088020bf051ddedc665506a7243fc7b05d" exitCode=0 Feb 03 10:23:39 crc kubenswrapper[5010]: I0203 10:23:39.838372 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nbvmd" event={"ID":"55e89174-6261-4cf0-9d5a-a750c362b79a","Type":"ContainerDied","Data":"387dd9fd0160568ebec8f1a6d5d1c5088020bf051ddedc665506a7243fc7b05d"} Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.227629 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.304000 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2d0be64-0307-43ee-9c2c-905f1d22c267-operator-scripts\") pod \"e2d0be64-0307-43ee-9c2c-905f1d22c267\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") " Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.304169 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pfxf\" (UniqueName: \"kubernetes.io/projected/e2d0be64-0307-43ee-9c2c-905f1d22c267-kube-api-access-8pfxf\") pod \"e2d0be64-0307-43ee-9c2c-905f1d22c267\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") " Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.305592 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d0be64-0307-43ee-9c2c-905f1d22c267-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2d0be64-0307-43ee-9c2c-905f1d22c267" (UID: "e2d0be64-0307-43ee-9c2c-905f1d22c267"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.307847 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.310620 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d0be64-0307-43ee-9c2c-905f1d22c267-kube-api-access-8pfxf" (OuterVolumeSpecName: "kube-api-access-8pfxf") pod "e2d0be64-0307-43ee-9c2c-905f1d22c267" (UID: "e2d0be64-0307-43ee-9c2c-905f1d22c267"). InnerVolumeSpecName "kube-api-access-8pfxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.405017 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jljbn\" (UniqueName: \"kubernetes.io/projected/0505d3aa-dab1-4f61-af12-69804ff1345a-kube-api-access-jljbn\") pod \"0505d3aa-dab1-4f61-af12-69804ff1345a\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") " Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.405140 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0505d3aa-dab1-4f61-af12-69804ff1345a-operator-scripts\") pod \"0505d3aa-dab1-4f61-af12-69804ff1345a\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") " Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.405668 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2d0be64-0307-43ee-9c2c-905f1d22c267-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.405701 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pfxf\" (UniqueName: \"kubernetes.io/projected/e2d0be64-0307-43ee-9c2c-905f1d22c267-kube-api-access-8pfxf\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.405664 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0505d3aa-dab1-4f61-af12-69804ff1345a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0505d3aa-dab1-4f61-af12-69804ff1345a" (UID: "0505d3aa-dab1-4f61-af12-69804ff1345a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.408325 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0505d3aa-dab1-4f61-af12-69804ff1345a-kube-api-access-jljbn" (OuterVolumeSpecName: "kube-api-access-jljbn") pod "0505d3aa-dab1-4f61-af12-69804ff1345a" (UID: "0505d3aa-dab1-4f61-af12-69804ff1345a"). InnerVolumeSpecName "kube-api-access-jljbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.507703 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jljbn\" (UniqueName: \"kubernetes.io/projected/0505d3aa-dab1-4f61-af12-69804ff1345a-kube-api-access-jljbn\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.507732 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0505d3aa-dab1-4f61-af12-69804ff1345a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.852303 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.852337 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-06a9-account-create-update-764vb" event={"ID":"e2d0be64-0307-43ee-9c2c-905f1d22c267","Type":"ContainerDied","Data":"0fb74049b6a21c7420ab7d6a07b8da1c6271bb44e5f1f8d80798d530bdb69a26"} Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.852383 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fb74049b6a21c7420ab7d6a07b8da1c6271bb44e5f1f8d80798d530bdb69a26" Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.857957 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.857954 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g8ncl" event={"ID":"0505d3aa-dab1-4f61-af12-69804ff1345a","Type":"ContainerDied","Data":"25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f"} Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.858161 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f" Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.914647 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:40 crc kubenswrapper[5010]: E0203 10:23:40.914876 5010 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 10:23:40 crc kubenswrapper[5010]: E0203 10:23:40.914906 5010 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 10:23:40 crc kubenswrapper[5010]: E0203 10:23:40.914976 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift podName:4b58c504-f707-43fe-91ca-4328c58e998c nodeName:}" failed. No retries permitted until 2026-02-03 10:23:56.914955325 +0000 UTC m=+1307.070931454 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift") pod "swift-storage-0" (UID: "4b58c504-f707-43fe-91ca-4328c58e998c") : configmap "swift-ring-files" not found Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.234227 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nbvmd" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.319763 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbx7k\" (UniqueName: \"kubernetes.io/projected/55e89174-6261-4cf0-9d5a-a750c362b79a-kube-api-access-sbx7k\") pod \"55e89174-6261-4cf0-9d5a-a750c362b79a\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") " Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.320012 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55e89174-6261-4cf0-9d5a-a750c362b79a-operator-scripts\") pod \"55e89174-6261-4cf0-9d5a-a750c362b79a\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") " Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.320492 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55e89174-6261-4cf0-9d5a-a750c362b79a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55e89174-6261-4cf0-9d5a-a750c362b79a" (UID: "55e89174-6261-4cf0-9d5a-a750c362b79a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.324574 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55e89174-6261-4cf0-9d5a-a750c362b79a-kube-api-access-sbx7k" (OuterVolumeSpecName: "kube-api-access-sbx7k") pod "55e89174-6261-4cf0-9d5a-a750c362b79a" (UID: "55e89174-6261-4cf0-9d5a-a750c362b79a"). InnerVolumeSpecName "kube-api-access-sbx7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.424395 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55e89174-6261-4cf0-9d5a-a750c362b79a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.424437 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbx7k\" (UniqueName: \"kubernetes.io/projected/55e89174-6261-4cf0-9d5a-a750c362b79a-kube-api-access-sbx7k\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.782119 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-xlhhb"] Feb 03 10:23:41 crc kubenswrapper[5010]: E0203 10:23:41.782448 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0505d3aa-dab1-4f61-af12-69804ff1345a" containerName="mariadb-database-create" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.782464 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="0505d3aa-dab1-4f61-af12-69804ff1345a" containerName="mariadb-database-create" Feb 03 10:23:41 crc kubenswrapper[5010]: E0203 10:23:41.782477 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d0be64-0307-43ee-9c2c-905f1d22c267" containerName="mariadb-account-create-update" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.782484 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d0be64-0307-43ee-9c2c-905f1d22c267" containerName="mariadb-account-create-update" Feb 03 10:23:41 crc kubenswrapper[5010]: E0203 10:23:41.782506 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e89174-6261-4cf0-9d5a-a750c362b79a" containerName="mariadb-account-create-update" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.782512 5010 
state_mem.go:107] "Deleted CPUSet assignment" podUID="55e89174-6261-4cf0-9d5a-a750c362b79a" containerName="mariadb-account-create-update" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.782658 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d0be64-0307-43ee-9c2c-905f1d22c267" containerName="mariadb-account-create-update" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.782669 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="55e89174-6261-4cf0-9d5a-a750c362b79a" containerName="mariadb-account-create-update" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.782691 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="0505d3aa-dab1-4f61-af12-69804ff1345a" containerName="mariadb-database-create" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.783158 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xlhhb" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.786728 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.786832 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mtbjz" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.800759 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xlhhb"] Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.833225 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-db-sync-config-data\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.833295 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-combined-ca-bundle\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.833333 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqxvx\" (UniqueName: \"kubernetes.io/projected/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-kube-api-access-nqxvx\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.833482 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-config-data\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.866418 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nbvmd" event={"ID":"55e89174-6261-4cf0-9d5a-a750c362b79a","Type":"ContainerDied","Data":"359eb915fcc11a138878bf839336ac69436afb76a57b48e722008cd5e4965ce1"} Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.866457 5010 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="359eb915fcc11a138878bf839336ac69436afb76a57b48e722008cd5e4965ce1" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.866512 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nbvmd" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.875140 5010 generic.go:334] "Generic (PLEG): container finished" podID="65c9ffaf-83e3-47c1-a1e8-b097b371ccec" containerID="d6d0dcfaf8344c8474b2f870e0a3c246fba9c7b000a18b30741b2b813b8e10cd" exitCode=0 Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.875179 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8qtn" event={"ID":"65c9ffaf-83e3-47c1-a1e8-b097b371ccec","Type":"ContainerDied","Data":"d6d0dcfaf8344c8474b2f870e0a3c246fba9c7b000a18b30741b2b813b8e10cd"} Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.935524 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-db-sync-config-data\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.935968 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-combined-ca-bundle\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.936143 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqxvx\" (UniqueName: \"kubernetes.io/projected/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-kube-api-access-nqxvx\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.936385 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-config-data\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.940185 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-db-sync-config-data\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.949124 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-combined-ca-bundle\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.953255 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-config-data\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb" Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.956985 5010 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nqxvx\" (UniqueName: \"kubernetes.io/projected/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-kube-api-access-nqxvx\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb" Feb 03 10:23:42 crc kubenswrapper[5010]: I0203 10:23:42.106533 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xlhhb" Feb 03 10:23:42 crc kubenswrapper[5010]: I0203 10:23:42.605709 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xlhhb"] Feb 03 10:23:42 crc kubenswrapper[5010]: I0203 10:23:42.881692 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xlhhb" event={"ID":"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3","Type":"ContainerStarted","Data":"46779b8951b31f9858ffd66ac6e32f691ea2a94f077b82226673a024b7efc699"} Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.197900 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.257780 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7n9j\" (UniqueName: \"kubernetes.io/projected/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-kube-api-access-c7n9j\") pod \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.257840 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-ring-data-devices\") pod \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.257883 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-swiftconf\") pod \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.258008 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-combined-ca-bundle\") pod \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.258069 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-etc-swift\") pod \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.258098 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-dispersionconf\") pod \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.258160 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-scripts\") pod \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\" (UID: 
\"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.259071 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "65c9ffaf-83e3-47c1-a1e8-b097b371ccec" (UID: "65c9ffaf-83e3-47c1-a1e8-b097b371ccec"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.260110 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "65c9ffaf-83e3-47c1-a1e8-b097b371ccec" (UID: "65c9ffaf-83e3-47c1-a1e8-b097b371ccec"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.265270 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-kube-api-access-c7n9j" (OuterVolumeSpecName: "kube-api-access-c7n9j") pod "65c9ffaf-83e3-47c1-a1e8-b097b371ccec" (UID: "65c9ffaf-83e3-47c1-a1e8-b097b371ccec"). InnerVolumeSpecName "kube-api-access-c7n9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.267044 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "65c9ffaf-83e3-47c1-a1e8-b097b371ccec" (UID: "65c9ffaf-83e3-47c1-a1e8-b097b371ccec"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.280081 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-scripts" (OuterVolumeSpecName: "scripts") pod "65c9ffaf-83e3-47c1-a1e8-b097b371ccec" (UID: "65c9ffaf-83e3-47c1-a1e8-b097b371ccec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.281605 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "65c9ffaf-83e3-47c1-a1e8-b097b371ccec" (UID: "65c9ffaf-83e3-47c1-a1e8-b097b371ccec"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.283438 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65c9ffaf-83e3-47c1-a1e8-b097b371ccec" (UID: "65c9ffaf-83e3-47c1-a1e8-b097b371ccec"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.361592 5010 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.362483 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.362538 5010 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.362554 5010 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.362570 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.362590 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7n9j\" (UniqueName: \"kubernetes.io/projected/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-kube-api-access-c7n9j\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.362608 5010 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.892694 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8qtn" event={"ID":"65c9ffaf-83e3-47c1-a1e8-b097b371ccec","Type":"ContainerDied","Data":"05528d7b25b91ddd2d6931ebb207234211817db001ec48df5c320eaf05808c38"} Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.892737 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05528d7b25b91ddd2d6931ebb207234211817db001ec48df5c320eaf05808c38" Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.892777 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:44 crc kubenswrapper[5010]: I0203 10:23:44.753346 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nbvmd"] Feb 03 10:23:44 crc kubenswrapper[5010]: I0203 10:23:44.759799 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-nbvmd"] Feb 03 10:23:44 crc kubenswrapper[5010]: I0203 10:23:44.904336 5010 generic.go:334] "Generic (PLEG): container finished" podID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerID="10e7a7e1923769d25869f1642046743d27038f14081a9edd79e0d2a9d1c7d095" exitCode=0 Feb 03 10:23:44 crc kubenswrapper[5010]: I0203 10:23:44.904425 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce83ed2-cbef-4045-8822-6f58268b28b3","Type":"ContainerDied","Data":"10e7a7e1923769d25869f1642046743d27038f14081a9edd79e0d2a9d1c7d095"} Feb 03 10:23:44 crc kubenswrapper[5010]: I0203 10:23:44.911077 5010 generic.go:334] "Generic (PLEG): container finished" podID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerID="35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae" exitCode=0 Feb 03 10:23:44 crc kubenswrapper[5010]: I0203 10:23:44.911116 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f2066c8b-8b89-4dcb-972d-aea4dcd1c105","Type":"ContainerDied","Data":"35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae"} Feb 03 10:23:45 crc kubenswrapper[5010]: I0203 10:23:45.921777 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce83ed2-cbef-4045-8822-6f58268b28b3","Type":"ContainerStarted","Data":"602c03e894fa88a9b33161b23751551ae10019029e054f5933d29cf4949f0620"} Feb 03 10:23:45 crc kubenswrapper[5010]: I0203 10:23:45.922312 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 03 10:23:45 crc kubenswrapper[5010]: I0203 10:23:45.924065 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f2066c8b-8b89-4dcb-972d-aea4dcd1c105","Type":"ContainerStarted","Data":"e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89"} Feb 03 10:23:45 crc kubenswrapper[5010]: I0203 10:23:45.924686 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:23:45 crc kubenswrapper[5010]: I0203 10:23:45.977141 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.387886458 podStartE2EDuration="1m19.977122243s" podCreationTimestamp="2026-02-03 10:22:26 +0000 UTC" firstStartedPulling="2026-02-03 10:22:29.023036027 +0000 UTC m=+1219.179012156" lastFinishedPulling="2026-02-03 10:23:10.612271802 +0000 UTC m=+1260.768247941" observedRunningTime="2026-02-03 10:23:45.948554847 +0000 UTC m=+1296.104530976" watchObservedRunningTime="2026-02-03 10:23:45.977122243 +0000 UTC m=+1296.133098362" Feb 03 10:23:45 crc kubenswrapper[5010]: I0203 10:23:45.977286 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.752866187 podStartE2EDuration="1m19.977282597s" podCreationTimestamp="2026-02-03 10:22:26 +0000 UTC" firstStartedPulling="2026-02-03 10:22:30.38547769 +0000 UTC m=+1220.541453819" lastFinishedPulling="2026-02-03 10:23:10.6098941 +0000 UTC m=+1260.765870229" 
Feb 03 10:23:46 crc kubenswrapper[5010]: I0203 10:23:46.390472 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:23:46 crc kubenswrapper[5010]: I0203 10:23:46.390815 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 10:23:46 crc kubenswrapper[5010]: I0203 10:23:46.523949 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55e89174-6261-4cf0-9d5a-a750c362b79a" path="/var/lib/kubelet/pods/55e89174-6261-4cf0-9d5a-a750c362b79a/volumes"
Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.768653 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-742kg"]
Feb 03 10:23:49 crc kubenswrapper[5010]: E0203 10:23:49.769434 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c9ffaf-83e3-47c1-a1e8-b097b371ccec" containerName="swift-ring-rebalance"
Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.769450 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c9ffaf-83e3-47c1-a1e8-b097b371ccec" containerName="swift-ring-rebalance"
Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.769654 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c9ffaf-83e3-47c1-a1e8-b097b371ccec" containerName="swift-ring-rebalance"
Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.770352 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-742kg"
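The machine-config-daemon liveness failure above is a plain connection refusal on the probe endpoint. A standalone Go equivalent of that HTTP check (the URL comes from the record; the helper itself is illustrative; kubelet counts status codes in the 2xx/3xx range as success):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe performs one HTTP GET against the health endpoint and reports
    // any transport or status failure, mirroring the logged probe result.
    func probe(url string, timeout time.Duration) error {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "connect: connection refused", as logged
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unhealthy status: %s", resp.Status)
        }
        return nil
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health", time.Second); err != nil {
            fmt.Println("Probe failed:", err)
        }
    }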
Need to start a new one" pod="openstack/root-account-create-update-742kg" Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.772575 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.783723 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-742kg"] Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.916689 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-operator-scripts\") pod \"root-account-create-update-742kg\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " pod="openstack/root-account-create-update-742kg" Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.916840 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw5c5\" (UniqueName: \"kubernetes.io/projected/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-kube-api-access-nw5c5\") pod \"root-account-create-update-742kg\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " pod="openstack/root-account-create-update-742kg" Feb 03 10:23:50 crc kubenswrapper[5010]: I0203 10:23:50.018788 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw5c5\" (UniqueName: \"kubernetes.io/projected/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-kube-api-access-nw5c5\") pod \"root-account-create-update-742kg\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " pod="openstack/root-account-create-update-742kg" Feb 03 10:23:50 crc kubenswrapper[5010]: I0203 10:23:50.018983 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-operator-scripts\") pod \"root-account-create-update-742kg\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " pod="openstack/root-account-create-update-742kg" Feb 03 10:23:50 crc kubenswrapper[5010]: I0203 10:23:50.019950 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-operator-scripts\") pod \"root-account-create-update-742kg\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " pod="openstack/root-account-create-update-742kg" Feb 03 10:23:50 crc kubenswrapper[5010]: I0203 10:23:50.052633 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw5c5\" (UniqueName: \"kubernetes.io/projected/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-kube-api-access-nw5c5\") pod \"root-account-create-update-742kg\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " pod="openstack/root-account-create-update-742kg" Feb 03 10:23:50 crc kubenswrapper[5010]: I0203 10:23:50.134360 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-742kg" Feb 03 10:23:51 crc kubenswrapper[5010]: I0203 10:23:51.913160 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ql6ht" podUID="1883c30e-4c38-468d-a5dc-91b07f167d67" containerName="ovn-controller" probeResult="failure" output=< Feb 03 10:23:51 crc kubenswrapper[5010]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 03 10:23:51 crc kubenswrapper[5010]: > Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.218554 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.221510 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.456666 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ql6ht-config-4w6d7"] Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.461910 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.469258 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.488410 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ql6ht-config-4w6d7"] Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.494939 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-log-ovn\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.495005 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-scripts\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.495100 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.495184 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-additional-scripts\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.495222 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run-ovn\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: 
\"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.495256 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj2cp\" (UniqueName: \"kubernetes.io/projected/6401d284-126c-4b35-b668-35a8844eb9bb-kube-api-access-mj2cp\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597095 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-log-ovn\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597165 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-scripts\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597207 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597362 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-additional-scripts\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597389 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run-ovn\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597425 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj2cp\" (UniqueName: \"kubernetes.io/projected/6401d284-126c-4b35-b668-35a8844eb9bb-kube-api-access-mj2cp\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597611 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run-ovn\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597847 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run\") pod 
\"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597958 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-log-ovn\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.598433 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-additional-scripts\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.599633 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-scripts\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.618349 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj2cp\" (UniqueName: \"kubernetes.io/projected/6401d284-126c-4b35-b668-35a8844eb9bb-kube-api-access-mj2cp\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.794624 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:23:56 crc kubenswrapper[5010]: I0203 10:23:56.980822 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ql6ht" podUID="1883c30e-4c38-468d-a5dc-91b07f167d67" containerName="ovn-controller" probeResult="failure" output=< Feb 03 10:23:56 crc kubenswrapper[5010]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 03 10:23:56 crc kubenswrapper[5010]: > Feb 03 10:23:57 crc kubenswrapper[5010]: I0203 10:23:57.025179 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:57 crc kubenswrapper[5010]: I0203 10:23:57.035454 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:57 crc kubenswrapper[5010]: I0203 10:23:57.132293 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 03 10:23:58 crc kubenswrapper[5010]: I0203 10:23:58.039555 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused" Feb 03 10:23:58 crc kubenswrapper[5010]: I0203 10:23:58.619065 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Feb 03 10:24:00 crc kubenswrapper[5010]: E0203 10:24:00.814908 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 03 10:24:00 crc kubenswrapper[5010]: E0203 10:24:00.815459 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqxvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-xlhhb_openstack(a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:24:00 crc kubenswrapper[5010]: E0203 10:24:00.817173 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
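The ErrImagePull for openstack-glance-api at 10:24:00 turns into ImagePullBackOff just below: failed pulls are retried with an exponentially growing delay between attempts. A sketch of that doubling back-off (the 10s initial delay and 5m cap are kubelet's commonly documented defaults, stated here as an assumption; the pull function is a stand-in):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // pullWithBackoff retries a failing pull, doubling the wait each time
    // up to maxDelay, mirroring the ErrImagePull -> ImagePullBackOff cycle.
    func pullWithBackoff(pull func() error, initial, maxDelay time.Duration, attempts int) error {
        delay := initial
        for i := 1; i <= attempts; i++ {
            if err := pull(); err == nil {
                return nil
            }
            fmt.Printf("attempt %d failed; ImagePullBackOff for %v\n", i, delay)
            time.Sleep(delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
        return errors.New("still failing after repeated pulls")
    }

    func main() {
        failing := func() error { return errors.New("rpc error: copying config: context canceled") }
        // Delays scaled down for the demo; the documented defaults are 10s doubling to a 5m cap.
        _ = pullWithBackoff(failing, 10*time.Millisecond, 5*time.Second, 3)
    }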
pod="openstack/glance-db-sync-xlhhb" podUID="a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" Feb 03 10:24:01 crc kubenswrapper[5010]: I0203 10:24:01.242866 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ql6ht-config-4w6d7"] Feb 03 10:24:01 crc kubenswrapper[5010]: I0203 10:24:01.320437 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-742kg"] Feb 03 10:24:01 crc kubenswrapper[5010]: W0203 10:24:01.337600 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0efd6c3_d0dc_4ebc_a116_d7e811177fa6.slice/crio-c83431ad2e0e03f2949a3d629ee5e7c316fee3c8a2ec436126bdd8f80ca23545 WatchSource:0}: Error finding container c83431ad2e0e03f2949a3d629ee5e7c316fee3c8a2ec436126bdd8f80ca23545: Status 404 returned error can't find the container with id c83431ad2e0e03f2949a3d629ee5e7c316fee3c8a2ec436126bdd8f80ca23545 Feb 03 10:24:01 crc kubenswrapper[5010]: I0203 10:24:01.399588 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ql6ht-config-4w6d7" event={"ID":"6401d284-126c-4b35-b668-35a8844eb9bb","Type":"ContainerStarted","Data":"1e83757b2e759c43060f4e53f21842ec4f1d15d13cbd2a72d2127f16f38ae78d"} Feb 03 10:24:01 crc kubenswrapper[5010]: E0203 10:24:01.401736 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-xlhhb" podUID="a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" Feb 03 10:24:01 crc kubenswrapper[5010]: I0203 10:24:01.571763 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 03 10:24:01 crc kubenswrapper[5010]: W0203 10:24:01.581694 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b58c504_f707_43fe_91ca_4328c58e998c.slice/crio-0eaae01f4b96a18589a8eada604baf15f3cc9bacb179bf7002392b15b4613a7f WatchSource:0}: Error finding container 0eaae01f4b96a18589a8eada604baf15f3cc9bacb179bf7002392b15b4613a7f: Status 404 returned error can't find the container with id 0eaae01f4b96a18589a8eada604baf15f3cc9bacb179bf7002392b15b4613a7f Feb 03 10:24:01 crc kubenswrapper[5010]: I0203 10:24:01.905884 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ql6ht" Feb 03 10:24:02 crc kubenswrapper[5010]: I0203 10:24:02.407879 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"0eaae01f4b96a18589a8eada604baf15f3cc9bacb179bf7002392b15b4613a7f"} Feb 03 10:24:02 crc kubenswrapper[5010]: I0203 10:24:02.410021 5010 generic.go:334] "Generic (PLEG): container finished" podID="6401d284-126c-4b35-b668-35a8844eb9bb" containerID="ecc134dc06388d88bee9d6893b38c4e64f29d454add40ba84636bf94ef646d8a" exitCode=0 Feb 03 10:24:02 crc kubenswrapper[5010]: I0203 10:24:02.410089 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ql6ht-config-4w6d7" event={"ID":"6401d284-126c-4b35-b668-35a8844eb9bb","Type":"ContainerDied","Data":"ecc134dc06388d88bee9d6893b38c4e64f29d454add40ba84636bf94ef646d8a"} Feb 03 10:24:02 crc kubenswrapper[5010]: I0203 10:24:02.411725 5010 generic.go:334] "Generic (PLEG): container finished" 
podID="c0efd6c3-d0dc-4ebc-a116-d7e811177fa6" containerID="b8b094bb4a4489910ae853a898b2603c46e5923639a21e30a68a2dca1eee68b8" exitCode=0 Feb 03 10:24:02 crc kubenswrapper[5010]: I0203 10:24:02.411752 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-742kg" event={"ID":"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6","Type":"ContainerDied","Data":"b8b094bb4a4489910ae853a898b2603c46e5923639a21e30a68a2dca1eee68b8"} Feb 03 10:24:02 crc kubenswrapper[5010]: I0203 10:24:02.411766 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-742kg" event={"ID":"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6","Type":"ContainerStarted","Data":"c83431ad2e0e03f2949a3d629ee5e7c316fee3c8a2ec436126bdd8f80ca23545"} Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.429459 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"db2a74b8f45f6c7de60dfd387527274c06d19c2dc0ac62cded7d6ed861fef928"} Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.430731 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"7c5201313cc638d3fde80ddc4c91f16178d4855a4de7218c1565d0b1a6a13512"} Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.740196 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.834996 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj2cp\" (UniqueName: \"kubernetes.io/projected/6401d284-126c-4b35-b668-35a8844eb9bb-kube-api-access-mj2cp\") pod \"6401d284-126c-4b35-b668-35a8844eb9bb\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.835129 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-scripts\") pod \"6401d284-126c-4b35-b668-35a8844eb9bb\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.835175 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run\") pod \"6401d284-126c-4b35-b668-35a8844eb9bb\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.835446 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run-ovn\") pod \"6401d284-126c-4b35-b668-35a8844eb9bb\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.835598 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-log-ovn\") pod \"6401d284-126c-4b35-b668-35a8844eb9bb\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.835666 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-additional-scripts\") pod \"6401d284-126c-4b35-b668-35a8844eb9bb\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.837163 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6401d284-126c-4b35-b668-35a8844eb9bb" (UID: "6401d284-126c-4b35-b668-35a8844eb9bb"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.837715 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6401d284-126c-4b35-b668-35a8844eb9bb" (UID: "6401d284-126c-4b35-b668-35a8844eb9bb"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.837730 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run" (OuterVolumeSpecName: "var-run") pod "6401d284-126c-4b35-b668-35a8844eb9bb" (UID: "6401d284-126c-4b35-b668-35a8844eb9bb"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.837737 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6401d284-126c-4b35-b668-35a8844eb9bb" (UID: "6401d284-126c-4b35-b668-35a8844eb9bb"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.839492 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-scripts" (OuterVolumeSpecName: "scripts") pod "6401d284-126c-4b35-b668-35a8844eb9bb" (UID: "6401d284-126c-4b35-b668-35a8844eb9bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.840104 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6401d284-126c-4b35-b668-35a8844eb9bb-kube-api-access-mj2cp" (OuterVolumeSpecName: "kube-api-access-mj2cp") pod "6401d284-126c-4b35-b668-35a8844eb9bb" (UID: "6401d284-126c-4b35-b668-35a8844eb9bb"). InnerVolumeSpecName "kube-api-access-mj2cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.848181 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-742kg" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937276 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw5c5\" (UniqueName: \"kubernetes.io/projected/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-kube-api-access-nw5c5\") pod \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937368 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-operator-scripts\") pod \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937597 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj2cp\" (UniqueName: \"kubernetes.io/projected/6401d284-126c-4b35-b668-35a8844eb9bb-kube-api-access-mj2cp\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937622 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937636 5010 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937644 5010 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937654 5010 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937663 5010 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.939207 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0efd6c3-d0dc-4ebc-a116-d7e811177fa6" (UID: "c0efd6c3-d0dc-4ebc-a116-d7e811177fa6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.944246 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-kube-api-access-nw5c5" (OuterVolumeSpecName: "kube-api-access-nw5c5") pod "c0efd6c3-d0dc-4ebc-a116-d7e811177fa6" (UID: "c0efd6c3-d0dc-4ebc-a116-d7e811177fa6"). InnerVolumeSpecName "kube-api-access-nw5c5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.039957 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw5c5\" (UniqueName: \"kubernetes.io/projected/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-kube-api-access-nw5c5\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.040486 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.444510 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.444502 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ql6ht-config-4w6d7" event={"ID":"6401d284-126c-4b35-b668-35a8844eb9bb","Type":"ContainerDied","Data":"1e83757b2e759c43060f4e53f21842ec4f1d15d13cbd2a72d2127f16f38ae78d"} Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.444692 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e83757b2e759c43060f4e53f21842ec4f1d15d13cbd2a72d2127f16f38ae78d" Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.446693 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-742kg" event={"ID":"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6","Type":"ContainerDied","Data":"c83431ad2e0e03f2949a3d629ee5e7c316fee3c8a2ec436126bdd8f80ca23545"} Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.446716 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-742kg" Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.446733 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c83431ad2e0e03f2949a3d629ee5e7c316fee3c8a2ec436126bdd8f80ca23545" Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.449411 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"d1c2a530c0466b671134916ca72597adfc90c967b55e06d9fba59851902ec967"} Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.449438 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"29effc9d44b4300198de5cfd88d55c8ad7bd542b084778e434bda412fc3f5c84"} Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.872619 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ql6ht-config-4w6d7"] Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.892817 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ql6ht-config-4w6d7"] Feb 03 10:24:06 crc kubenswrapper[5010]: I0203 10:24:06.512833 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6401d284-126c-4b35-b668-35a8844eb9bb" path="/var/lib/kubelet/pods/6401d284-126c-4b35-b668-35a8844eb9bb/volumes" Feb 03 10:24:07 crc kubenswrapper[5010]: I0203 10:24:07.681351 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"0cffea7078f46c14a80ad94482e3f71482a844ae18e5b5ec841cd848d2fe8e71"} Feb 03 10:24:07 crc kubenswrapper[5010]: I0203 10:24:07.681642 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"16443360c283954226183f93ac04429762280bd4c2147613462cb311b6496193"} Feb 03 10:24:07 crc kubenswrapper[5010]: I0203 10:24:07.681653 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"21414971b15e50b37e5ebd3f2bdf70d9842887d2857e5857563802e5f1a3f07f"} Feb 03 10:24:07 crc kubenswrapper[5010]: I0203 10:24:07.681661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"1dedc0697d0e6e0ad551f52949751ad019da7d427b25b62e39ae6b61b076e0b7"} Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.361623 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.621462 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.680266 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-54zjm"] Feb 03 10:24:08 crc kubenswrapper[5010]: E0203 10:24:08.680669 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6401d284-126c-4b35-b668-35a8844eb9bb" containerName="ovn-config" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.680692 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="6401d284-126c-4b35-b668-35a8844eb9bb" containerName="ovn-config" Feb 03 10:24:08 crc kubenswrapper[5010]: E0203 10:24:08.680727 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0efd6c3-d0dc-4ebc-a116-d7e811177fa6" containerName="mariadb-account-create-update" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.680735 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0efd6c3-d0dc-4ebc-a116-d7e811177fa6" containerName="mariadb-account-create-update" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.680895 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0efd6c3-d0dc-4ebc-a116-d7e811177fa6" containerName="mariadb-account-create-update" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.680920 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="6401d284-126c-4b35-b668-35a8844eb9bb" containerName="ovn-config" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.681472 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.720353 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-54zjm"] Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.789148 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-z7nxm"] Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.790530 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.802403 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-z7nxm"] Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.860872 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tmgg\" (UniqueName: \"kubernetes.io/projected/9c0e1d98-9045-4a70-8021-ac7dcf843775-kube-api-access-8tmgg\") pod \"cinder-db-create-54zjm\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.861017 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c0e1d98-9045-4a70-8021-ac7dcf843775-operator-scripts\") pod \"cinder-db-create-54zjm\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.861064 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c5b7adb-c7e4-4014-b37f-674861868979-operator-scripts\") pod \"barbican-db-create-z7nxm\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.861106 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd6dp\" (UniqueName: \"kubernetes.io/projected/1c5b7adb-c7e4-4014-b37f-674861868979-kube-api-access-hd6dp\") pod \"barbican-db-create-z7nxm\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.910277 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5b83-account-create-update-hrlzs"] Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.912015 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.919259 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.026299 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c5b7adb-c7e4-4014-b37f-674861868979-operator-scripts\") pod \"barbican-db-create-z7nxm\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.026582 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd6dp\" (UniqueName: \"kubernetes.io/projected/1c5b7adb-c7e4-4014-b37f-674861868979-kube-api-access-hd6dp\") pod \"barbican-db-create-z7nxm\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.027064 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tmgg\" (UniqueName: \"kubernetes.io/projected/9c0e1d98-9045-4a70-8021-ac7dcf843775-kube-api-access-8tmgg\") pod \"cinder-db-create-54zjm\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.027290 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c0e1d98-9045-4a70-8021-ac7dcf843775-operator-scripts\") pod \"cinder-db-create-54zjm\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.027732 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c5b7adb-c7e4-4014-b37f-674861868979-operator-scripts\") pod \"barbican-db-create-z7nxm\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.029555 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c0e1d98-9045-4a70-8021-ac7dcf843775-operator-scripts\") pod \"cinder-db-create-54zjm\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.078522 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd6dp\" (UniqueName: \"kubernetes.io/projected/1c5b7adb-c7e4-4014-b37f-674861868979-kube-api-access-hd6dp\") pod \"barbican-db-create-z7nxm\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.079993 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5b83-account-create-update-hrlzs"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.086004 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tmgg\" (UniqueName: \"kubernetes.io/projected/9c0e1d98-9045-4a70-8021-ac7dcf843775-kube-api-access-8tmgg\") pod \"cinder-db-create-54zjm\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.118590 
5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.121982 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-5fk6k"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.124527 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.129517 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fce7685e-8301-4c02-8e1b-386646d84264-operator-scripts\") pod \"cinder-5b83-account-create-update-hrlzs\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.129613 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5p4x\" (UniqueName: \"kubernetes.io/projected/fce7685e-8301-4c02-8e1b-386646d84264-kube-api-access-m5p4x\") pod \"cinder-5b83-account-create-update-hrlzs\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.157455 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5fk6k"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.232276 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fce7685e-8301-4c02-8e1b-386646d84264-operator-scripts\") pod \"cinder-5b83-account-create-update-hrlzs\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.232379 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5p4x\" (UniqueName: \"kubernetes.io/projected/fce7685e-8301-4c02-8e1b-386646d84264-kube-api-access-m5p4x\") pod \"cinder-5b83-account-create-update-hrlzs\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.232449 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83561b9b-ec1d-4ef5-bb05-48780834e40d-operator-scripts\") pod \"neutron-db-create-5fk6k\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.232504 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smzrg\" (UniqueName: \"kubernetes.io/projected/83561b9b-ec1d-4ef5-bb05-48780834e40d-kube-api-access-smzrg\") pod \"neutron-db-create-5fk6k\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.234813 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fce7685e-8301-4c02-8e1b-386646d84264-operator-scripts\") pod \"cinder-5b83-account-create-update-hrlzs\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 
03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.242741 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-f06e-account-create-update-glqr6"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.243900 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.256496 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.261910 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5p4x\" (UniqueName: \"kubernetes.io/projected/fce7685e-8301-4c02-8e1b-386646d84264-kube-api-access-m5p4x\") pod \"cinder-5b83-account-create-update-hrlzs\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.277610 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f06e-account-create-update-glqr6"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.325822 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.335853 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83561b9b-ec1d-4ef5-bb05-48780834e40d-operator-scripts\") pod \"neutron-db-create-5fk6k\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.335963 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smzrg\" (UniqueName: \"kubernetes.io/projected/83561b9b-ec1d-4ef5-bb05-48780834e40d-kube-api-access-smzrg\") pod \"neutron-db-create-5fk6k\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.336032 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpsd2\" (UniqueName: \"kubernetes.io/projected/8144e4b8-89a7-4c08-86b9-219ea9d4645c-kube-api-access-tpsd2\") pod \"barbican-f06e-account-create-update-glqr6\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.336113 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8144e4b8-89a7-4c08-86b9-219ea9d4645c-operator-scripts\") pod \"barbican-f06e-account-create-update-glqr6\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.337375 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83561b9b-ec1d-4ef5-bb05-48780834e40d-operator-scripts\") pod \"neutron-db-create-5fk6k\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.364311 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smzrg\" (UniqueName: 
\"kubernetes.io/projected/83561b9b-ec1d-4ef5-bb05-48780834e40d-kube-api-access-smzrg\") pod \"neutron-db-create-5fk6k\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.437233 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpsd2\" (UniqueName: \"kubernetes.io/projected/8144e4b8-89a7-4c08-86b9-219ea9d4645c-kube-api-access-tpsd2\") pod \"barbican-f06e-account-create-update-glqr6\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.437324 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8144e4b8-89a7-4c08-86b9-219ea9d4645c-operator-scripts\") pod \"barbican-f06e-account-create-update-glqr6\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.438377 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8144e4b8-89a7-4c08-86b9-219ea9d4645c-operator-scripts\") pod \"barbican-f06e-account-create-update-glqr6\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.544022 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.560796 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.605866 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpsd2\" (UniqueName: \"kubernetes.io/projected/8144e4b8-89a7-4c08-86b9-219ea9d4645c-kube-api-access-tpsd2\") pod \"barbican-f06e-account-create-update-glqr6\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.607312 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.655774 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5102-account-create-update-nv7jr"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.657118 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.667072 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.715198 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5102-account-create-update-nv7jr"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.754050 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-b8wjx"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.755153 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90501abd-ab27-4c54-bd38-239e5803689b-operator-scripts\") pod \"neutron-5102-account-create-update-nv7jr\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.755370 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnqgw\" (UniqueName: \"kubernetes.io/projected/90501abd-ab27-4c54-bd38-239e5803689b-kube-api-access-xnqgw\") pod \"neutron-5102-account-create-update-nv7jr\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.755784 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.758997 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.759533 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.759810 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.762381 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xdhtt" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.807091 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-b8wjx"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.856864 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grp76\" (UniqueName: \"kubernetes.io/projected/a81f0078-44e5-4bbc-82ce-3d648e2e32db-kube-api-access-grp76\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.856932 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90501abd-ab27-4c54-bd38-239e5803689b-operator-scripts\") pod \"neutron-5102-account-create-update-nv7jr\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.856964 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-combined-ca-bundle\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.857015 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnqgw\" (UniqueName: \"kubernetes.io/projected/90501abd-ab27-4c54-bd38-239e5803689b-kube-api-access-xnqgw\") pod \"neutron-5102-account-create-update-nv7jr\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.857041 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-config-data\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.857965 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90501abd-ab27-4c54-bd38-239e5803689b-operator-scripts\") pod \"neutron-5102-account-create-update-nv7jr\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.958615 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-config-data\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.958762 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grp76\" (UniqueName: \"kubernetes.io/projected/a81f0078-44e5-4bbc-82ce-3d648e2e32db-kube-api-access-grp76\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.958799 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-combined-ca-bundle\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.039371 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-config-data\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.041247 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnqgw\" (UniqueName: \"kubernetes.io/projected/90501abd-ab27-4c54-bd38-239e5803689b-kube-api-access-xnqgw\") pod \"neutron-5102-account-create-update-nv7jr\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.041251 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-combined-ca-bundle\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.050067 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grp76\" (UniqueName: \"kubernetes.io/projected/a81f0078-44e5-4bbc-82ce-3d648e2e32db-kube-api-access-grp76\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.119783 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.299133 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.449993 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-z7nxm"] Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.471291 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-54zjm"] Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.640994 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5fk6k"] Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.661591 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f06e-account-create-update-glqr6"] Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.668423 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5b83-account-create-update-hrlzs"] Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.816291 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.838316 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.220607 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-b8wjx"] Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.281005 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5102-account-create-update-nv7jr"] Feb 03 10:24:11 crc kubenswrapper[5010]: W0203 10:24:11.293512 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90501abd_ab27_4c54_bd38_239e5803689b.slice/crio-f8485215bb4e69bf51b493e589891df592e5976041e72593d4f67139fa1b872c WatchSource:0}: Error finding container f8485215bb4e69bf51b493e589891df592e5976041e72593d4f67139fa1b872c: Status 404 returned error can't find the container with id f8485215bb4e69bf51b493e589891df592e5976041e72593d4f67139fa1b872c Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.331015 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.757007 5010 generic.go:334] "Generic (PLEG): container finished" podID="83561b9b-ec1d-4ef5-bb05-48780834e40d" containerID="175dd1c77e9a4d7de137280af274a9e26cedb6a12f8e491f927188b800875447" exitCode=0 Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.757109 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-5fk6k" event={"ID":"83561b9b-ec1d-4ef5-bb05-48780834e40d","Type":"ContainerDied","Data":"175dd1c77e9a4d7de137280af274a9e26cedb6a12f8e491f927188b800875447"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.757142 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5fk6k" event={"ID":"83561b9b-ec1d-4ef5-bb05-48780834e40d","Type":"ContainerStarted","Data":"af80050199b9095462b302599b666bc9450b38b9838b7bf7e684a30e30caf772"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.761164 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8wjx" event={"ID":"a81f0078-44e5-4bbc-82ce-3d648e2e32db","Type":"ContainerStarted","Data":"a7b60789589a796270441190392ade515cdcca0df1868691375db1fd1edbc5e5"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.783353 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"b836803d4779ec4a49b461a286a6d80e04b860b0b01da7dc4d4c40cfae68deeb"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.785708 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5b83-account-create-update-hrlzs" event={"ID":"fce7685e-8301-4c02-8e1b-386646d84264","Type":"ContainerStarted","Data":"5fd86f16e791f88f37d27cd6030a471785bd1ebc82355253888f61f74084bc56"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.785758 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5b83-account-create-update-hrlzs" event={"ID":"fce7685e-8301-4c02-8e1b-386646d84264","Type":"ContainerStarted","Data":"c13284cefec97b3f12efa97f85dd824080363feefcefccda188b362f41c20f43"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.789992 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f06e-account-create-update-glqr6" event={"ID":"8144e4b8-89a7-4c08-86b9-219ea9d4645c","Type":"ContainerStarted","Data":"ea0bf3943fa2c4dbc35b90869ad8099512a31ad225b933cd4437ed8cc1770bf0"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.790068 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f06e-account-create-update-glqr6" event={"ID":"8144e4b8-89a7-4c08-86b9-219ea9d4645c","Type":"ContainerStarted","Data":"37d7165e050b73a9b2db161747cacdeca84e8090a079b1b3dce24ed46e010bb6"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.792279 5010 generic.go:334] "Generic (PLEG): container finished" podID="9c0e1d98-9045-4a70-8021-ac7dcf843775" containerID="5168c22750de205db4c3cef2742987a3feeb1460c92bf43dadf92987bcb6f04e" exitCode=0 Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.792309 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-54zjm" event={"ID":"9c0e1d98-9045-4a70-8021-ac7dcf843775","Type":"ContainerDied","Data":"5168c22750de205db4c3cef2742987a3feeb1460c92bf43dadf92987bcb6f04e"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.792332 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-54zjm" event={"ID":"9c0e1d98-9045-4a70-8021-ac7dcf843775","Type":"ContainerStarted","Data":"6d47e0b433b06ac06d298258d90d3d0668c8ed77604ca3eaf431f6b0a84e592a"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.794499 5010 generic.go:334] "Generic (PLEG): container finished" podID="1c5b7adb-c7e4-4014-b37f-674861868979" containerID="6a575e19d1e33cee77eb78ea1b934b59f477f565a39712db7cebceb61e00a60f" exitCode=0 Feb 03 
10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.794647 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z7nxm" event={"ID":"1c5b7adb-c7e4-4014-b37f-674861868979","Type":"ContainerDied","Data":"6a575e19d1e33cee77eb78ea1b934b59f477f565a39712db7cebceb61e00a60f"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.794680 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z7nxm" event={"ID":"1c5b7adb-c7e4-4014-b37f-674861868979","Type":"ContainerStarted","Data":"d25b0088e1755bc09fffce4f9f6579141c9392fb4e69275100ea890163ce1c0f"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.796130 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5102-account-create-update-nv7jr" event={"ID":"90501abd-ab27-4c54-bd38-239e5803689b","Type":"ContainerStarted","Data":"02a4a1176b9659935ba9d5084dc9f0a979b3bf3765756a868a98c381f2e4df2c"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.796157 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5102-account-create-update-nv7jr" event={"ID":"90501abd-ab27-4c54-bd38-239e5803689b","Type":"ContainerStarted","Data":"f8485215bb4e69bf51b493e589891df592e5976041e72593d4f67139fa1b872c"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.813542 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-5b83-account-create-update-hrlzs" podStartSLOduration=3.81351046 podStartE2EDuration="3.81351046s" podCreationTimestamp="2026-02-03 10:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:24:11.811737865 +0000 UTC m=+1321.967713994" watchObservedRunningTime="2026-02-03 10:24:11.81351046 +0000 UTC m=+1321.969486589" Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.831963 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5102-account-create-update-nv7jr" podStartSLOduration=2.831943515 podStartE2EDuration="2.831943515s" podCreationTimestamp="2026-02-03 10:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:24:11.826407783 +0000 UTC m=+1321.982383912" watchObservedRunningTime="2026-02-03 10:24:11.831943515 +0000 UTC m=+1321.987919644" Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.899550 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-f06e-account-create-update-glqr6" podStartSLOduration=2.8995204980000002 podStartE2EDuration="2.899520498s" podCreationTimestamp="2026-02-03 10:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:24:11.87515808 +0000 UTC m=+1322.031134209" watchObservedRunningTime="2026-02-03 10:24:11.899520498 +0000 UTC m=+1322.055496627" Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.810722 5010 generic.go:334] "Generic (PLEG): container finished" podID="fce7685e-8301-4c02-8e1b-386646d84264" containerID="5fd86f16e791f88f37d27cd6030a471785bd1ebc82355253888f61f74084bc56" exitCode=0 Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.813070 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5b83-account-create-update-hrlzs" 
event={"ID":"fce7685e-8301-4c02-8e1b-386646d84264","Type":"ContainerDied","Data":"5fd86f16e791f88f37d27cd6030a471785bd1ebc82355253888f61f74084bc56"} Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.815825 5010 generic.go:334] "Generic (PLEG): container finished" podID="8144e4b8-89a7-4c08-86b9-219ea9d4645c" containerID="ea0bf3943fa2c4dbc35b90869ad8099512a31ad225b933cd4437ed8cc1770bf0" exitCode=0 Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.815872 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f06e-account-create-update-glqr6" event={"ID":"8144e4b8-89a7-4c08-86b9-219ea9d4645c","Type":"ContainerDied","Data":"ea0bf3943fa2c4dbc35b90869ad8099512a31ad225b933cd4437ed8cc1770bf0"} Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.845464 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"d1ce747a50d5f46d5b3c16c92f2b4f8b9e4ff276e546b1973d05421fc0f0d97e"} Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.845521 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"b3d355cfd98d32954b37cf45219a4be1c32cf5b14c94e1a52df20c6c96e39cbd"} Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.845532 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"80f81f8a1df2e968b1e4c71e1d5878acda4986e765bee0232e1d5fee79af9d39"} Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.848163 5010 generic.go:334] "Generic (PLEG): container finished" podID="90501abd-ab27-4c54-bd38-239e5803689b" containerID="02a4a1176b9659935ba9d5084dc9f0a979b3bf3765756a868a98c381f2e4df2c" exitCode=0 Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.848362 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5102-account-create-update-nv7jr" event={"ID":"90501abd-ab27-4c54-bd38-239e5803689b","Type":"ContainerDied","Data":"02a4a1176b9659935ba9d5084dc9f0a979b3bf3765756a868a98c381f2e4df2c"} Feb 03 10:24:13 crc kubenswrapper[5010]: I0203 10:24:13.908821 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"a932203b34d249fb2ffede1ec05b784720503da7b29b24c7ca9666515aa4cf12"} Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.822112 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.842446 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.845429 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.858472 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smzrg\" (UniqueName: \"kubernetes.io/projected/83561b9b-ec1d-4ef5-bb05-48780834e40d-kube-api-access-smzrg\") pod \"83561b9b-ec1d-4ef5-bb05-48780834e40d\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.858556 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83561b9b-ec1d-4ef5-bb05-48780834e40d-operator-scripts\") pod \"83561b9b-ec1d-4ef5-bb05-48780834e40d\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.859926 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83561b9b-ec1d-4ef5-bb05-48780834e40d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83561b9b-ec1d-4ef5-bb05-48780834e40d" (UID: "83561b9b-ec1d-4ef5-bb05-48780834e40d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.874431 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83561b9b-ec1d-4ef5-bb05-48780834e40d-kube-api-access-smzrg" (OuterVolumeSpecName: "kube-api-access-smzrg") pod "83561b9b-ec1d-4ef5-bb05-48780834e40d" (UID: "83561b9b-ec1d-4ef5-bb05-48780834e40d"). InnerVolumeSpecName "kube-api-access-smzrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.940081 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z7nxm" event={"ID":"1c5b7adb-c7e4-4014-b37f-674861868979","Type":"ContainerDied","Data":"d25b0088e1755bc09fffce4f9f6579141c9392fb4e69275100ea890163ce1c0f"} Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.940454 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d25b0088e1755bc09fffce4f9f6579141c9392fb4e69275100ea890163ce1c0f" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.940560 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.952124 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.958642 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"cf26953bc1f2dd09b88d82a0c5f1103a17f6a80dcfb8303aa71f48cf4e96c654"} Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.960158 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c5b7adb-c7e4-4014-b37f-674861868979-operator-scripts\") pod \"1c5b7adb-c7e4-4014-b37f-674861868979\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.960245 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c0e1d98-9045-4a70-8021-ac7dcf843775-operator-scripts\") pod \"9c0e1d98-9045-4a70-8021-ac7dcf843775\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.960281 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tmgg\" (UniqueName: \"kubernetes.io/projected/9c0e1d98-9045-4a70-8021-ac7dcf843775-kube-api-access-8tmgg\") pod \"9c0e1d98-9045-4a70-8021-ac7dcf843775\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.960503 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd6dp\" (UniqueName: \"kubernetes.io/projected/1c5b7adb-c7e4-4014-b37f-674861868979-kube-api-access-hd6dp\") pod \"1c5b7adb-c7e4-4014-b37f-674861868979\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.960938 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83561b9b-ec1d-4ef5-bb05-48780834e40d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.960967 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smzrg\" (UniqueName: \"kubernetes.io/projected/83561b9b-ec1d-4ef5-bb05-48780834e40d-kube-api-access-smzrg\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.961018 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0e1d98-9045-4a70-8021-ac7dcf843775-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c0e1d98-9045-4a70-8021-ac7dcf843775" (UID: "9c0e1d98-9045-4a70-8021-ac7dcf843775"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.961634 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c5b7adb-c7e4-4014-b37f-674861868979-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c5b7adb-c7e4-4014-b37f-674861868979" (UID: "1c5b7adb-c7e4-4014-b37f-674861868979"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.963168 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5fk6k" event={"ID":"83561b9b-ec1d-4ef5-bb05-48780834e40d","Type":"ContainerDied","Data":"af80050199b9095462b302599b666bc9450b38b9838b7bf7e684a30e30caf772"} Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.963224 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af80050199b9095462b302599b666bc9450b38b9838b7bf7e684a30e30caf772" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.963230 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.973910 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c5b7adb-c7e4-4014-b37f-674861868979-kube-api-access-hd6dp" (OuterVolumeSpecName: "kube-api-access-hd6dp") pod "1c5b7adb-c7e4-4014-b37f-674861868979" (UID: "1c5b7adb-c7e4-4014-b37f-674861868979"). InnerVolumeSpecName "kube-api-access-hd6dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.976866 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-54zjm" event={"ID":"9c0e1d98-9045-4a70-8021-ac7dcf843775","Type":"ContainerDied","Data":"6d47e0b433b06ac06d298258d90d3d0668c8ed77604ca3eaf431f6b0a84e592a"} Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.976914 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d47e0b433b06ac06d298258d90d3d0668c8ed77604ca3eaf431f6b0a84e592a" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.976989 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.980461 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c0e1d98-9045-4a70-8021-ac7dcf843775-kube-api-access-8tmgg" (OuterVolumeSpecName: "kube-api-access-8tmgg") pod "9c0e1d98-9045-4a70-8021-ac7dcf843775" (UID: "9c0e1d98-9045-4a70-8021-ac7dcf843775"). InnerVolumeSpecName "kube-api-access-8tmgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.984964 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.999471 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.062157 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fce7685e-8301-4c02-8e1b-386646d84264-operator-scripts\") pod \"fce7685e-8301-4c02-8e1b-386646d84264\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.062409 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8144e4b8-89a7-4c08-86b9-219ea9d4645c-operator-scripts\") pod \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.062467 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90501abd-ab27-4c54-bd38-239e5803689b-operator-scripts\") pod \"90501abd-ab27-4c54-bd38-239e5803689b\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.062601 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5p4x\" (UniqueName: \"kubernetes.io/projected/fce7685e-8301-4c02-8e1b-386646d84264-kube-api-access-m5p4x\") pod \"fce7685e-8301-4c02-8e1b-386646d84264\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.062671 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpsd2\" (UniqueName: \"kubernetes.io/projected/8144e4b8-89a7-4c08-86b9-219ea9d4645c-kube-api-access-tpsd2\") pod \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.062753 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnqgw\" (UniqueName: \"kubernetes.io/projected/90501abd-ab27-4c54-bd38-239e5803689b-kube-api-access-xnqgw\") pod \"90501abd-ab27-4c54-bd38-239e5803689b\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.063193 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd6dp\" (UniqueName: \"kubernetes.io/projected/1c5b7adb-c7e4-4014-b37f-674861868979-kube-api-access-hd6dp\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.063231 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c5b7adb-c7e4-4014-b37f-674861868979-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.063245 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c0e1d98-9045-4a70-8021-ac7dcf843775-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.063256 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tmgg\" (UniqueName: \"kubernetes.io/projected/9c0e1d98-9045-4a70-8021-ac7dcf843775-kube-api-access-8tmgg\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.063328 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/90501abd-ab27-4c54-bd38-239e5803689b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90501abd-ab27-4c54-bd38-239e5803689b" (UID: "90501abd-ab27-4c54-bd38-239e5803689b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.063947 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8144e4b8-89a7-4c08-86b9-219ea9d4645c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8144e4b8-89a7-4c08-86b9-219ea9d4645c" (UID: "8144e4b8-89a7-4c08-86b9-219ea9d4645c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.064462 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fce7685e-8301-4c02-8e1b-386646d84264-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fce7685e-8301-4c02-8e1b-386646d84264" (UID: "fce7685e-8301-4c02-8e1b-386646d84264"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.072519 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8144e4b8-89a7-4c08-86b9-219ea9d4645c-kube-api-access-tpsd2" (OuterVolumeSpecName: "kube-api-access-tpsd2") pod "8144e4b8-89a7-4c08-86b9-219ea9d4645c" (UID: "8144e4b8-89a7-4c08-86b9-219ea9d4645c"). InnerVolumeSpecName "kube-api-access-tpsd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.072605 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fce7685e-8301-4c02-8e1b-386646d84264-kube-api-access-m5p4x" (OuterVolumeSpecName: "kube-api-access-m5p4x") pod "fce7685e-8301-4c02-8e1b-386646d84264" (UID: "fce7685e-8301-4c02-8e1b-386646d84264"). InnerVolumeSpecName "kube-api-access-m5p4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.072693 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90501abd-ab27-4c54-bd38-239e5803689b-kube-api-access-xnqgw" (OuterVolumeSpecName: "kube-api-access-xnqgw") pod "90501abd-ab27-4c54-bd38-239e5803689b" (UID: "90501abd-ab27-4c54-bd38-239e5803689b"). InnerVolumeSpecName "kube-api-access-xnqgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.164856 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnqgw\" (UniqueName: \"kubernetes.io/projected/90501abd-ab27-4c54-bd38-239e5803689b-kube-api-access-xnqgw\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.164901 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fce7685e-8301-4c02-8e1b-386646d84264-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.164914 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8144e4b8-89a7-4c08-86b9-219ea9d4645c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.164925 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90501abd-ab27-4c54-bd38-239e5803689b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.164934 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5p4x\" (UniqueName: \"kubernetes.io/projected/fce7685e-8301-4c02-8e1b-386646d84264-kube-api-access-m5p4x\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.164945 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpsd2\" (UniqueName: \"kubernetes.io/projected/8144e4b8-89a7-4c08-86b9-219ea9d4645c-kube-api-access-tpsd2\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.017570 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.017558 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f06e-account-create-update-glqr6" event={"ID":"8144e4b8-89a7-4c08-86b9-219ea9d4645c","Type":"ContainerDied","Data":"37d7165e050b73a9b2db161747cacdeca84e8090a079b1b3dce24ed46e010bb6"} Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.018247 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37d7165e050b73a9b2db161747cacdeca84e8090a079b1b3dce24ed46e010bb6" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.061849 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"2af5bfc5fbc2a5eac12a440d02e92d50b536160904f5367797a5fcf2fcc9b3bc"} Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.067555 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.067570 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5102-account-create-update-nv7jr" event={"ID":"90501abd-ab27-4c54-bd38-239e5803689b","Type":"ContainerDied","Data":"f8485215bb4e69bf51b493e589891df592e5976041e72593d4f67139fa1b872c"} Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.067621 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8485215bb4e69bf51b493e589891df592e5976041e72593d4f67139fa1b872c" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.070490 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5b83-account-create-update-hrlzs" event={"ID":"fce7685e-8301-4c02-8e1b-386646d84264","Type":"ContainerDied","Data":"c13284cefec97b3f12efa97f85dd824080363feefcefccda188b362f41c20f43"} Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.070539 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c13284cefec97b3f12efa97f85dd824080363feefcefccda188b362f41c20f43" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.070585 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.102083 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=43.800691703 podStartE2EDuration="53.102058186s" podCreationTimestamp="2026-02-03 10:23:23 +0000 UTC" firstStartedPulling="2026-02-03 10:24:01.585139949 +0000 UTC m=+1311.741116078" lastFinishedPulling="2026-02-03 10:24:10.886506432 +0000 UTC m=+1321.042482561" observedRunningTime="2026-02-03 10:24:16.097568101 +0000 UTC m=+1326.253544240" watchObservedRunningTime="2026-02-03 10:24:16.102058186 +0000 UTC m=+1326.258034315" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.392520 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.392582 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.449674 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-tpx4x"] Feb 03 10:24:16 crc kubenswrapper[5010]: E0203 10:24:16.450179 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90501abd-ab27-4c54-bd38-239e5803689b" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450209 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="90501abd-ab27-4c54-bd38-239e5803689b" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: E0203 10:24:16.450262 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce7685e-8301-4c02-8e1b-386646d84264" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc 
kubenswrapper[5010]: I0203 10:24:16.450270 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce7685e-8301-4c02-8e1b-386646d84264" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: E0203 10:24:16.450283 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8144e4b8-89a7-4c08-86b9-219ea9d4645c" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450291 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8144e4b8-89a7-4c08-86b9-219ea9d4645c" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: E0203 10:24:16.450321 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0e1d98-9045-4a70-8021-ac7dcf843775" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450327 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0e1d98-9045-4a70-8021-ac7dcf843775" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: E0203 10:24:16.450349 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5b7adb-c7e4-4014-b37f-674861868979" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450355 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5b7adb-c7e4-4014-b37f-674861868979" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: E0203 10:24:16.450372 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83561b9b-ec1d-4ef5-bb05-48780834e40d" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450378 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="83561b9b-ec1d-4ef5-bb05-48780834e40d" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450573 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c5b7adb-c7e4-4014-b37f-674861868979" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450585 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="90501abd-ab27-4c54-bd38-239e5803689b" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450596 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce7685e-8301-4c02-8e1b-386646d84264" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450614 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="83561b9b-ec1d-4ef5-bb05-48780834e40d" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450628 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8144e4b8-89a7-4c08-86b9-219ea9d4645c" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450644 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c0e1d98-9045-4a70-8021-ac7dcf843775" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.451838 5010 util.go:30] "No sandbox for pod can be found. 
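The prober.go:107 entry above is the structured form of the liveness-probe failure that patch_prober.go:28 prints in prose just before it. A sketch pulling out failed probes with their target and error output, under the same one-entry-per-line assumption (path hypothetical):

import re

PROBE = re.compile(
    r'"Probe failed" probeType="([^"]+)" pod="([^"]+)" podUID="[^"]+" '
    r'containerName="([^"]+)" probeResult="([^"]+)" output="(.*)"$'
)

for line in open("kubelet.log", encoding="utf-8", errors="replace"):  # hypothetical path
    m = PROBE.search(line)
    if m:
        ptype, pod, container, result, output = m.groups()
        print(f"{ptype} probe {result} for {pod}/{container}: {output}")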
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.455795 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.465906 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-tpx4x"] Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.502139 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.502332 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-svc\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.502378 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-config\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.502495 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqnmc\" (UniqueName: \"kubernetes.io/projected/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-kube-api-access-zqnmc\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.502522 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.502552 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.605204 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqnmc\" (UniqueName: \"kubernetes.io/projected/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-kube-api-access-zqnmc\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.605280 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" 
(UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.605310 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.605376 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.605450 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-svc\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.605473 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-config\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.606540 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.606577 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-config\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.607259 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.608017 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.608068 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-svc\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 
10:24:16.633093 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqnmc\" (UniqueName: \"kubernetes.io/projected/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-kube-api-access-zqnmc\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.781191 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:20 crc kubenswrapper[5010]: W0203 10:24:20.281882 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eb55fd4_6f97_47c3_bd98_89ca6331cf88.slice/crio-93d0e004e008b5e1b05321fcaf14211b090b2038acd1b389851fdfc6ab3c1331 WatchSource:0}: Error finding container 93d0e004e008b5e1b05321fcaf14211b090b2038acd1b389851fdfc6ab3c1331: Status 404 returned error can't find the container with id 93d0e004e008b5e1b05321fcaf14211b090b2038acd1b389851fdfc6ab3c1331 Feb 03 10:24:20 crc kubenswrapper[5010]: I0203 10:24:20.317228 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-tpx4x"] Feb 03 10:24:21 crc kubenswrapper[5010]: I0203 10:24:21.225559 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8wjx" event={"ID":"a81f0078-44e5-4bbc-82ce-3d648e2e32db","Type":"ContainerStarted","Data":"3e8d95734ac813f12b8b00d5738e5d5d21869fee2e05c53312641bbb6e639906"} Feb 03 10:24:21 crc kubenswrapper[5010]: I0203 10:24:21.234172 5010 generic.go:334] "Generic (PLEG): container finished" podID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerID="9870cb3be829d265aa30927c41a48cc7802f5d65aec23cea9f8bcd10b02b6b19" exitCode=0 Feb 03 10:24:21 crc kubenswrapper[5010]: I0203 10:24:21.234257 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" event={"ID":"9eb55fd4-6f97-47c3-bd98-89ca6331cf88","Type":"ContainerDied","Data":"9870cb3be829d265aa30927c41a48cc7802f5d65aec23cea9f8bcd10b02b6b19"} Feb 03 10:24:21 crc kubenswrapper[5010]: I0203 10:24:21.234290 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" event={"ID":"9eb55fd4-6f97-47c3-bd98-89ca6331cf88","Type":"ContainerStarted","Data":"93d0e004e008b5e1b05321fcaf14211b090b2038acd1b389851fdfc6ab3c1331"} Feb 03 10:24:21 crc kubenswrapper[5010]: I0203 10:24:21.258010 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-b8wjx" podStartSLOduration=3.288603048 podStartE2EDuration="12.257986783s" podCreationTimestamp="2026-02-03 10:24:09 +0000 UTC" firstStartedPulling="2026-02-03 10:24:11.33018323 +0000 UTC m=+1321.486159359" lastFinishedPulling="2026-02-03 10:24:20.299566965 +0000 UTC m=+1330.455543094" observedRunningTime="2026-02-03 10:24:21.249165596 +0000 UTC m=+1331.405141725" watchObservedRunningTime="2026-02-03 10:24:21.257986783 +0000 UTC m=+1331.413962912" Feb 03 10:24:22 crc kubenswrapper[5010]: I0203 10:24:22.470609 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" event={"ID":"9eb55fd4-6f97-47c3-bd98-89ca6331cf88","Type":"ContainerStarted","Data":"c9a7cc65c09b93f157cada4e0c074bf50be6834a16b4169ebac2602a35731c7e"} Feb 03 10:24:22 crc kubenswrapper[5010]: I0203 10:24:22.472725 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 
10:24:22 crc kubenswrapper[5010]: I0203 10:24:22.475725 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xlhhb" event={"ID":"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3","Type":"ContainerStarted","Data":"c2c236cbcbee82d440a00402bffa84360077e085e5045869a24060dbc0c3411c"} Feb 03 10:24:22 crc kubenswrapper[5010]: I0203 10:24:22.500411 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" podStartSLOduration=6.500386571 podStartE2EDuration="6.500386571s" podCreationTimestamp="2026-02-03 10:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:24:22.493066372 +0000 UTC m=+1332.649042511" watchObservedRunningTime="2026-02-03 10:24:22.500386571 +0000 UTC m=+1332.656362700" Feb 03 10:24:22 crc kubenswrapper[5010]: I0203 10:24:22.527578 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-xlhhb" podStartSLOduration=3.8322007080000002 podStartE2EDuration="41.527541231s" podCreationTimestamp="2026-02-03 10:23:41 +0000 UTC" firstStartedPulling="2026-02-03 10:23:42.611153141 +0000 UTC m=+1292.767129260" lastFinishedPulling="2026-02-03 10:24:20.306493654 +0000 UTC m=+1330.462469783" observedRunningTime="2026-02-03 10:24:22.516856575 +0000 UTC m=+1332.672832704" watchObservedRunningTime="2026-02-03 10:24:22.527541231 +0000 UTC m=+1332.683517380" Feb 03 10:24:26 crc kubenswrapper[5010]: I0203 10:24:26.784455 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:26 crc kubenswrapper[5010]: I0203 10:24:26.862588 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-c5kgf"] Feb 03 10:24:26 crc kubenswrapper[5010]: I0203 10:24:26.862959 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-c5kgf" podUID="44cce4a6-14dd-4b2d-9473-49edee803476" containerName="dnsmasq-dns" containerID="cri-o://f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98" gracePeriod=10 Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.384292 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.502975 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-dns-svc\") pod \"44cce4a6-14dd-4b2d-9473-49edee803476\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.503035 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-sb\") pod \"44cce4a6-14dd-4b2d-9473-49edee803476\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.503066 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-nb\") pod \"44cce4a6-14dd-4b2d-9473-49edee803476\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.503113 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-config\") pod \"44cce4a6-14dd-4b2d-9473-49edee803476\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.503326 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5td8\" (UniqueName: \"kubernetes.io/projected/44cce4a6-14dd-4b2d-9473-49edee803476-kube-api-access-s5td8\") pod \"44cce4a6-14dd-4b2d-9473-49edee803476\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.512311 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44cce4a6-14dd-4b2d-9473-49edee803476-kube-api-access-s5td8" (OuterVolumeSpecName: "kube-api-access-s5td8") pod "44cce4a6-14dd-4b2d-9473-49edee803476" (UID: "44cce4a6-14dd-4b2d-9473-49edee803476"). InnerVolumeSpecName "kube-api-access-s5td8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.561661 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44cce4a6-14dd-4b2d-9473-49edee803476" (UID: "44cce4a6-14dd-4b2d-9473-49edee803476"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.565502 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44cce4a6-14dd-4b2d-9473-49edee803476" (UID: "44cce4a6-14dd-4b2d-9473-49edee803476"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.574651 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-config" (OuterVolumeSpecName: "config") pod "44cce4a6-14dd-4b2d-9473-49edee803476" (UID: "44cce4a6-14dd-4b2d-9473-49edee803476"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.588054 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44cce4a6-14dd-4b2d-9473-49edee803476" (UID: "44cce4a6-14dd-4b2d-9473-49edee803476"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.608061 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.608099 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5td8\" (UniqueName: \"kubernetes.io/projected/44cce4a6-14dd-4b2d-9473-49edee803476-kube-api-access-s5td8\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.608112 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.608138 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.608147 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.005975 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-c5kgf" event={"ID":"44cce4a6-14dd-4b2d-9473-49edee803476","Type":"ContainerDied","Data":"f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98"} Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.006067 5010 scope.go:117] "RemoveContainer" containerID="f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.006059 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.005911 5010 generic.go:334] "Generic (PLEG): container finished" podID="44cce4a6-14dd-4b2d-9473-49edee803476" containerID="f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98" exitCode=0 Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.006309 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-c5kgf" event={"ID":"44cce4a6-14dd-4b2d-9473-49edee803476","Type":"ContainerDied","Data":"7b4cc9746175c611db5edf3a8b25a3610c6d4de7b21e5812358190938f2ecfc7"} Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.050091 5010 scope.go:117] "RemoveContainer" containerID="3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.073206 5010 scope.go:117] "RemoveContainer" containerID="f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98" Feb 03 10:24:28 crc kubenswrapper[5010]: E0203 10:24:28.074371 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98\": container with ID starting with f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98 not found: ID does not exist" containerID="f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.074426 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98"} err="failed to get container status \"f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98\": rpc error: code = NotFound desc = could not find container \"f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98\": container with ID starting with f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98 not found: ID does not exist" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.074453 5010 scope.go:117] "RemoveContainer" containerID="3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.074574 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-c5kgf"] Feb 03 10:24:28 crc kubenswrapper[5010]: E0203 10:24:28.075119 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf\": container with ID starting with 3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf not found: ID does not exist" containerID="3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.075153 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf"} err="failed to get container status \"3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf\": rpc error: code = NotFound desc = could not find container \"3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf\": container with ID starting with 3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf not found: ID does not exist" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.098875 5010 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-c5kgf"] Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.516134 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44cce4a6-14dd-4b2d-9473-49edee803476" path="/var/lib/kubelet/pods/44cce4a6-14dd-4b2d-9473-49edee803476/volumes" Feb 03 10:24:30 crc kubenswrapper[5010]: I0203 10:24:30.030279 5010 generic.go:334] "Generic (PLEG): container finished" podID="a81f0078-44e5-4bbc-82ce-3d648e2e32db" containerID="3e8d95734ac813f12b8b00d5738e5d5d21869fee2e05c53312641bbb6e639906" exitCode=0 Feb 03 10:24:30 crc kubenswrapper[5010]: I0203 10:24:30.030327 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8wjx" event={"ID":"a81f0078-44e5-4bbc-82ce-3d648e2e32db","Type":"ContainerDied","Data":"3e8d95734ac813f12b8b00d5738e5d5d21869fee2e05c53312641bbb6e639906"} Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.416036 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.490166 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grp76\" (UniqueName: \"kubernetes.io/projected/a81f0078-44e5-4bbc-82ce-3d648e2e32db-kube-api-access-grp76\") pod \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.490552 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-combined-ca-bundle\") pod \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.491683 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-config-data\") pod \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.506930 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a81f0078-44e5-4bbc-82ce-3d648e2e32db-kube-api-access-grp76" (OuterVolumeSpecName: "kube-api-access-grp76") pod "a81f0078-44e5-4bbc-82ce-3d648e2e32db" (UID: "a81f0078-44e5-4bbc-82ce-3d648e2e32db"). InnerVolumeSpecName "kube-api-access-grp76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.532684 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a81f0078-44e5-4bbc-82ce-3d648e2e32db" (UID: "a81f0078-44e5-4bbc-82ce-3d648e2e32db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.561367 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-config-data" (OuterVolumeSpecName: "config-data") pod "a81f0078-44e5-4bbc-82ce-3d648e2e32db" (UID: "a81f0078-44e5-4bbc-82ce-3d648e2e32db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.596986 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.597048 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grp76\" (UniqueName: \"kubernetes.io/projected/a81f0078-44e5-4bbc-82ce-3d648e2e32db-kube-api-access-grp76\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.597067 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.056040 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8wjx" event={"ID":"a81f0078-44e5-4bbc-82ce-3d648e2e32db","Type":"ContainerDied","Data":"a7b60789589a796270441190392ade515cdcca0df1868691375db1fd1edbc5e5"} Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.056094 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7b60789589a796270441190392ade515cdcca0df1868691375db1fd1edbc5e5" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.056170 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.494727 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7w6tr"] Feb 03 10:24:32 crc kubenswrapper[5010]: E0203 10:24:32.499024 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81f0078-44e5-4bbc-82ce-3d648e2e32db" containerName="keystone-db-sync" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.499173 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81f0078-44e5-4bbc-82ce-3d648e2e32db" containerName="keystone-db-sync" Feb 03 10:24:32 crc kubenswrapper[5010]: E0203 10:24:32.499288 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44cce4a6-14dd-4b2d-9473-49edee803476" containerName="init" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.499305 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="44cce4a6-14dd-4b2d-9473-49edee803476" containerName="init" Feb 03 10:24:32 crc kubenswrapper[5010]: E0203 10:24:32.499901 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44cce4a6-14dd-4b2d-9473-49edee803476" containerName="dnsmasq-dns" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.499915 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="44cce4a6-14dd-4b2d-9473-49edee803476" containerName="dnsmasq-dns" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.500263 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="44cce4a6-14dd-4b2d-9473-49edee803476" containerName="dnsmasq-dns" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.500287 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a81f0078-44e5-4bbc-82ce-3d648e2e32db" containerName="keystone-db-sync" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.509491 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.572508 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.573042 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xdhtt" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.573277 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.573052 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.573639 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.579276 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7w6tr"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.587923 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gpttb"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.594330 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.620603 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gpttb"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.632619 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-config-data\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.632695 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw5x7\" (UniqueName: \"kubernetes.io/projected/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-kube-api-access-lw5x7\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.632744 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-combined-ca-bundle\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.632779 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-scripts\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.632807 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-credential-keys\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 
crc kubenswrapper[5010]: I0203 10:24:32.632863 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-fernet-keys\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.736986 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-svc\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.737467 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw5x7\" (UniqueName: \"kubernetes.io/projected/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-kube-api-access-lw5x7\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.737620 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-combined-ca-bundle\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.737744 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-scripts\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.737847 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-config\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.737949 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-credential-keys\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.738154 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-fernet-keys\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.738533 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.738610 5010 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.738692 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.738765 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rz9l\" (UniqueName: \"kubernetes.io/projected/378ea53a-1006-4116-a56d-7c466c494224-kube-api-access-9rz9l\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.738828 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-config-data\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.748614 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-config-data\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.750101 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-combined-ca-bundle\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.762405 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-fernet-keys\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.751629 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-scripts\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.753450 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-credential-keys\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.779690 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-mvrf4"] Feb 03 10:24:32 crc 
kubenswrapper[5010]: I0203 10:24:32.781743 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.792762 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.793053 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.793555 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw5x7\" (UniqueName: \"kubernetes.io/projected/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-kube-api-access-lw5x7\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.805731 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-j789z" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.838927 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mvrf4"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.841012 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.841079 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.841147 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.841179 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rz9l\" (UniqueName: \"kubernetes.io/projected/378ea53a-1006-4116-a56d-7c466c494224-kube-api-access-9rz9l\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.841226 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-svc\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.841314 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-config\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc 
kubenswrapper[5010]: I0203 10:24:32.842535 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-config\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.843737 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.844608 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.845231 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.845835 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-svc\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.862318 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57c9d98597-wmwqg"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.864615 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.874065 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-9bhsm" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.874372 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.874544 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.874697 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.875304 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.905767 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57c9d98597-wmwqg"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.947927 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-combined-ca-bundle\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.948025 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdlkh\" (UniqueName: \"kubernetes.io/projected/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-kube-api-access-tdlkh\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.948173 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-config\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.958333 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.961379 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.968096 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rz9l\" (UniqueName: \"kubernetes.io/projected/378ea53a-1006-4116-a56d-7c466c494224-kube-api-access-9rz9l\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.980734 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.991469 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.993063 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.995873 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-b9wwp"] Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.007112 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.012815 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.013164 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gk5q6" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.013326 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.041799 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-g6tdx"] Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.044106 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.050990 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-j94mw" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.051457 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.052785 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdlkh\" (UniqueName: \"kubernetes.io/projected/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-kube-api-access-tdlkh\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.052834 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-scripts\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.052934 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-config-data\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.052973 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f771bc6-23e3-4382-89ea-f773805f789c-horizon-secret-key\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.053048 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-config\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.053082 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxtzm\" (UniqueName: \"kubernetes.io/projected/7f771bc6-23e3-4382-89ea-f773805f789c-kube-api-access-qxtzm\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " 
pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.053143 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-combined-ca-bundle\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.053170 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f771bc6-23e3-4382-89ea-f773805f789c-logs\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.066609 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-config\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.083902 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-combined-ca-bundle\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.104474 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-b9wwp"] Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.135349 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g6tdx"] Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.152902 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdlkh\" (UniqueName: \"kubernetes.io/projected/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-kube-api-access-tdlkh\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.155709 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-log-httpd\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.155790 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-combined-ca-bundle\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.155841 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-config-data\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.155868 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/7f771bc6-23e3-4382-89ea-f773805f789c-horizon-secret-key\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.155912 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-combined-ca-bundle\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.155987 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rmrl\" (UniqueName: \"kubernetes.io/projected/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-kube-api-access-4rmrl\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156028 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-db-sync-config-data\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156063 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxtzm\" (UniqueName: \"kubernetes.io/projected/7f771bc6-23e3-4382-89ea-f773805f789c-kube-api-access-qxtzm\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156101 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156188 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-scripts\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156242 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-run-httpd\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156284 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f771bc6-23e3-4382-89ea-f773805f789c-logs\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156341 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1acc33e7-f3ae-4131-a003-aa6b592269c6-etc-machine-id\") pod 
\"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156371 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156424 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-db-sync-config-data\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156455 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-scripts\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156522 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l7tp\" (UniqueName: \"kubernetes.io/projected/bad34e68-b20a-486c-b06b-e19f5aaaf917-kube-api-access-6l7tp\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156553 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-config-data\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156590 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-scripts\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156617 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-config-data\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156643 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f846k\" (UniqueName: \"kubernetes.io/projected/1acc33e7-f3ae-4131-a003-aa6b592269c6-kube-api-access-f846k\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.159075 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f771bc6-23e3-4382-89ea-f773805f789c-logs\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" 
Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.160093 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-scripts\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.163550 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f771bc6-23e3-4382-89ea-f773805f789c-horizon-secret-key\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.164505 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-config-data\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.169632 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.220768 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxtzm\" (UniqueName: \"kubernetes.io/projected/7f771bc6-23e3-4382-89ea-f773805f789c-kube-api-access-qxtzm\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.244227 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259164 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-combined-ca-bundle\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259293 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-combined-ca-bundle\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259334 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rmrl\" (UniqueName: \"kubernetes.io/projected/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-kube-api-access-4rmrl\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259367 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-db-sync-config-data\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259399 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259443 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-scripts\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259466 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-run-httpd\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259494 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1acc33e7-f3ae-4131-a003-aa6b592269c6-etc-machine-id\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259511 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259538 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-db-sync-config-data\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259748 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l7tp\" (UniqueName: \"kubernetes.io/projected/bad34e68-b20a-486c-b06b-e19f5aaaf917-kube-api-access-6l7tp\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259789 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-config-data\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259831 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-scripts\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259850 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-config-data\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259870 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f846k\" (UniqueName: \"kubernetes.io/projected/1acc33e7-f3ae-4131-a003-aa6b592269c6-kube-api-access-f846k\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259930 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-log-httpd\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.261002 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-log-httpd\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.265465 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-combined-ca-bundle\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.270851 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-combined-ca-bundle\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.274425 5010 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-db-sync-config-data\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.275444 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.279701 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-config-data\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.280671 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-db-sync-config-data\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.281145 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-run-httpd\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.281226 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1acc33e7-f3ae-4131-a003-aa6b592269c6-etc-machine-id\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.282875 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-scripts\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.286639 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.287099 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-tptfc"] Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.289570 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-scripts\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.291539 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.302897 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-config-data\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.303792 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.304432 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.304568 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dtdfs" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.321343 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tptfc"] Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.346763 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gpttb"] Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.362673 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-scripts\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.362734 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcm2f\" (UniqueName: \"kubernetes.io/projected/29ef610c-3c09-4b27-9b97-3a5350388caa-kube-api-access-wcm2f\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.362825 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ef610c-3c09-4b27-9b97-3a5350388caa-logs\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.362893 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-config-data\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.362979 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-combined-ca-bundle\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.464554 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ef610c-3c09-4b27-9b97-3a5350388caa-logs\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " 
pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.464653 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-config-data\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.464744 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-combined-ca-bundle\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.464877 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-scripts\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.464918 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcm2f\" (UniqueName: \"kubernetes.io/projected/29ef610c-3c09-4b27-9b97-3a5350388caa-kube-api-access-wcm2f\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.466010 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ef610c-3c09-4b27-9b97-3a5350388caa-logs\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.470731 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-config-data\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.473915 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-combined-ca-bundle\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.478135 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-scripts\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.509475 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.755153 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l7tp\" (UniqueName: \"kubernetes.io/projected/bad34e68-b20a-486c-b06b-e19f5aaaf917-kube-api-access-6l7tp\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.765018 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f846k\" (UniqueName: \"kubernetes.io/projected/1acc33e7-f3ae-4131-a003-aa6b592269c6-kube-api-access-f846k\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.765563 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.792003 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rmrl\" (UniqueName: \"kubernetes.io/projected/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-kube-api-access-4rmrl\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.797287 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcm2f\" (UniqueName: \"kubernetes.io/projected/29ef610c-3c09-4b27-9b97-3a5350388caa-kube-api-access-wcm2f\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.800581 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.867260 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-r249m"] Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.593596 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.597652 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.616081 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.818943 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-config\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.819437 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.819519 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hhmq\" (UniqueName: \"kubernetes.io/projected/f7535aa4-5a5e-4663-b9c5-7822d0836660-kube-api-access-4hhmq\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.819626 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.820492 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.820647 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.856770 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-r249m"] Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.856799 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6548998769-npmxc"] Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.858062 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6548998769-npmxc"] Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.858083 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mvrf4"] Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.858161 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.927463 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.927578 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.927710 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-config\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.927754 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.927787 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hhmq\" (UniqueName: \"kubernetes.io/projected/f7535aa4-5a5e-4663-b9c5-7822d0836660-kube-api-access-4hhmq\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.927848 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.927949 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.929160 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.929156 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-config\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc 
kubenswrapper[5010]: I0203 10:24:34.929447 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.929543 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.949985 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hhmq\" (UniqueName: \"kubernetes.io/projected/f7535aa4-5a5e-4663-b9c5-7822d0836660-kube-api-access-4hhmq\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.031837 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr86z\" (UniqueName: \"kubernetes.io/projected/2f7faa93-7520-4d4b-b153-ed311effd90b-kube-api-access-cr86z\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.032464 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f7faa93-7520-4d4b-b153-ed311effd90b-logs\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.032591 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-scripts\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.032646 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f7faa93-7520-4d4b-b153-ed311effd90b-horizon-secret-key\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.032723 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-config-data\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.136631 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr86z\" (UniqueName: \"kubernetes.io/projected/2f7faa93-7520-4d4b-b153-ed311effd90b-kube-api-access-cr86z\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 
10:24:35.136731 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f7faa93-7520-4d4b-b153-ed311effd90b-logs\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.136814 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-scripts\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.136862 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f7faa93-7520-4d4b-b153-ed311effd90b-horizon-secret-key\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.136908 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-config-data\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.138830 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f7faa93-7520-4d4b-b153-ed311effd90b-logs\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.139820 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-scripts\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.141171 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-config-data\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.166485 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.191546 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f7faa93-7520-4d4b-b153-ed311effd90b-horizon-secret-key\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.192652 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr86z\" (UniqueName: \"kubernetes.io/projected/2f7faa93-7520-4d4b-b153-ed311effd90b-kube-api-access-cr86z\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.484626 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.960762 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mvrf4" event={"ID":"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c","Type":"ContainerStarted","Data":"2b0073ad8287411e1d59389e4452039e032d8e37832a1112a2e60a18196d8ae0"} Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.171684 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57c9d98597-wmwqg"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.231270 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b5b4c5ff-x859r"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.234812 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.278742 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b5b4c5ff-x859r"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.316725 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d4dk\" (UniqueName: \"kubernetes.io/projected/716318b2-6f04-4ff9-94c2-e107ebf51cb6-kube-api-access-8d4dk\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.316779 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/716318b2-6f04-4ff9-94c2-e107ebf51cb6-logs\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.316828 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-config-data\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.316924 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/716318b2-6f04-4ff9-94c2-e107ebf51cb6-horizon-secret-key\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " 
pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.316979 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-scripts\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.421236 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d4dk\" (UniqueName: \"kubernetes.io/projected/716318b2-6f04-4ff9-94c2-e107ebf51cb6-kube-api-access-8d4dk\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.421300 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/716318b2-6f04-4ff9-94c2-e107ebf51cb6-logs\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.421352 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-config-data\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.421429 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/716318b2-6f04-4ff9-94c2-e107ebf51cb6-horizon-secret-key\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.421487 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-scripts\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.422608 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-scripts\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.423205 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/716318b2-6f04-4ff9-94c2-e107ebf51cb6-logs\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.445146 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-config-data\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.464639 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d4dk\" (UniqueName: 
\"kubernetes.io/projected/716318b2-6f04-4ff9-94c2-e107ebf51cb6-kube-api-access-8d4dk\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.468067 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/716318b2-6f04-4ff9-94c2-e107ebf51cb6-horizon-secret-key\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.490763 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.542091 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gpttb"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.552177 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57c9d98597-wmwqg"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.568939 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: W0203 10:24:36.578458 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f771bc6_23e3_4382_89ea_f773805f789c.slice/crio-96801178c0f60b1be70f5a00384d47d9cf626976ce906ad24548febe89fb7fc8 WatchSource:0}: Error finding container 96801178c0f60b1be70f5a00384d47d9cf626976ce906ad24548febe89fb7fc8: Status 404 returned error can't find the container with id 96801178c0f60b1be70f5a00384d47d9cf626976ce906ad24548febe89fb7fc8 Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.600451 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.607604 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g6tdx"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.697269 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7w6tr"] Feb 03 10:24:36 crc kubenswrapper[5010]: W0203 10:24:36.758205 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c75dd5e_8b56_4dc0_8e80_a6df3ec9a7ba.slice/crio-d2dbdaf7c4fb793e606130a48124449992f37d61583b140dcfaf7dbb8bb3f1d2 WatchSource:0}: Error finding container d2dbdaf7c4fb793e606130a48124449992f37d61583b140dcfaf7dbb8bb3f1d2: Status 404 returned error can't find the container with id d2dbdaf7c4fb793e606130a48124449992f37d61583b140dcfaf7dbb8bb3f1d2 Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.115031 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tptfc"] Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.138911 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57c9d98597-wmwqg" event={"ID":"7f771bc6-23e3-4382-89ea-f773805f789c","Type":"ContainerStarted","Data":"96801178c0f60b1be70f5a00384d47d9cf626976ce906ad24548febe89fb7fc8"} Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.140325 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-b9wwp"] Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.152197 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-7w6tr" event={"ID":"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba","Type":"ContainerStarted","Data":"d2dbdaf7c4fb793e606130a48124449992f37d61583b140dcfaf7dbb8bb3f1d2"} Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.167322 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-gpttb" event={"ID":"378ea53a-1006-4116-a56d-7c466c494224","Type":"ContainerStarted","Data":"359ae3ad38c8aceae2d332d6b3825bb94840bbf169efcda9149246e76b81e498"} Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.176026 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-r249m"] Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.178317 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mvrf4" event={"ID":"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c","Type":"ContainerStarted","Data":"2f477c6764bb977e8cc3e17e43a92a85fa737e9bdd4ffa07901f030c855e03b4"} Feb 03 10:24:37 crc kubenswrapper[5010]: W0203 10:24:37.189785 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7535aa4_5a5e_4663_b9c5_7822d0836660.slice/crio-9c2ae9a172420144ce552f204613ad111ecce479d2e000586e38710bc90ab902 WatchSource:0}: Error finding container 9c2ae9a172420144ce552f204613ad111ecce479d2e000586e38710bc90ab902: Status 404 returned error can't find the container with id 9c2ae9a172420144ce552f204613ad111ecce479d2e000586e38710bc90ab902 Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.194688 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4338eb03-3ad6-4d68-8d8a-a37694aff6d7","Type":"ContainerStarted","Data":"61a59197d7bdf8ea63d4d37b8f71bb48f78f9037194046295bca9711dd2a3194"} Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.194808 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6548998769-npmxc"] Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.227626 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-mvrf4" podStartSLOduration=5.227591887 podStartE2EDuration="5.227591887s" podCreationTimestamp="2026-02-03 10:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:24:37.224409535 +0000 UTC m=+1347.380385664" watchObservedRunningTime="2026-02-03 10:24:37.227591887 +0000 UTC m=+1347.383568016" Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.232551 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6tdx" event={"ID":"bad34e68-b20a-486c-b06b-e19f5aaaf917","Type":"ContainerStarted","Data":"a9d5da882cdcbed71ee51c06f06cb45291d0d12cebefa2201b69150f2363476e"} Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.391913 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b5b4c5ff-x859r"] Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.311358 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tptfc" event={"ID":"29ef610c-3c09-4b27-9b97-3a5350388caa","Type":"ContainerStarted","Data":"8dff0c755a50d3ce83f3790da9a77abbdd3719d09b62bae731558162867118c1"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.318646 5010 generic.go:334] "Generic (PLEG): container finished" podID="378ea53a-1006-4116-a56d-7c466c494224" 
containerID="00e55dbee70f472f8a93914d11cda4d852198236db1abda35bbcb237004b7327" exitCode=0 Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.318834 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-gpttb" event={"ID":"378ea53a-1006-4116-a56d-7c466c494224","Type":"ContainerDied","Data":"00e55dbee70f472f8a93914d11cda4d852198236db1abda35bbcb237004b7327"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.323522 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5b4c5ff-x859r" event={"ID":"716318b2-6f04-4ff9-94c2-e107ebf51cb6","Type":"ContainerStarted","Data":"2db889447ff0bc0e6f1ca25bbfa660b5dc01678a634757b799ec80a5560e67e4"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.330914 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6548998769-npmxc" event={"ID":"2f7faa93-7520-4d4b-b153-ed311effd90b","Type":"ContainerStarted","Data":"b292b07f4a535a045b80c60269a48c9544e180d091d0068c00e312baf2b8ddb0"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.340072 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9wwp" event={"ID":"1acc33e7-f3ae-4131-a003-aa6b592269c6","Type":"ContainerStarted","Data":"dcbb37a8fd2f82ef82d966d8287692e503ed1134f141d666defaaf1447e6aa0a"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.356133 5010 generic.go:334] "Generic (PLEG): container finished" podID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerID="86940200a0f167ad56e8101970695c50456840462697eef05dc72062b5c839d7" exitCode=0 Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.356294 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" event={"ID":"f7535aa4-5a5e-4663-b9c5-7822d0836660","Type":"ContainerDied","Data":"86940200a0f167ad56e8101970695c50456840462697eef05dc72062b5c839d7"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.356335 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" event={"ID":"f7535aa4-5a5e-4663-b9c5-7822d0836660","Type":"ContainerStarted","Data":"9c2ae9a172420144ce552f204613ad111ecce479d2e000586e38710bc90ab902"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.366651 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7w6tr" event={"ID":"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba","Type":"ContainerStarted","Data":"284a769b3c25b0cdea9e5ddf661cc8aed190c024694193ebf7516c57518d0765"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.422413 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7w6tr" podStartSLOduration=6.422376298 podStartE2EDuration="6.422376298s" podCreationTimestamp="2026-02-03 10:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:24:38.413486109 +0000 UTC m=+1348.569462238" watchObservedRunningTime="2026-02-03 10:24:38.422376298 +0000 UTC m=+1348.578352427" Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.913978 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.054654 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-nb\") pod \"378ea53a-1006-4116-a56d-7c466c494224\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.054717 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-config\") pod \"378ea53a-1006-4116-a56d-7c466c494224\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.054891 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-swift-storage-0\") pod \"378ea53a-1006-4116-a56d-7c466c494224\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.054937 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rz9l\" (UniqueName: \"kubernetes.io/projected/378ea53a-1006-4116-a56d-7c466c494224-kube-api-access-9rz9l\") pod \"378ea53a-1006-4116-a56d-7c466c494224\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.055034 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-svc\") pod \"378ea53a-1006-4116-a56d-7c466c494224\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.055077 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-sb\") pod \"378ea53a-1006-4116-a56d-7c466c494224\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.081492 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/378ea53a-1006-4116-a56d-7c466c494224-kube-api-access-9rz9l" (OuterVolumeSpecName: "kube-api-access-9rz9l") pod "378ea53a-1006-4116-a56d-7c466c494224" (UID: "378ea53a-1006-4116-a56d-7c466c494224"). InnerVolumeSpecName "kube-api-access-9rz9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.094912 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "378ea53a-1006-4116-a56d-7c466c494224" (UID: "378ea53a-1006-4116-a56d-7c466c494224"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.102907 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "378ea53a-1006-4116-a56d-7c466c494224" (UID: "378ea53a-1006-4116-a56d-7c466c494224"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.119739 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "378ea53a-1006-4116-a56d-7c466c494224" (UID: "378ea53a-1006-4116-a56d-7c466c494224"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.124871 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-config" (OuterVolumeSpecName: "config") pod "378ea53a-1006-4116-a56d-7c466c494224" (UID: "378ea53a-1006-4116-a56d-7c466c494224"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.157656 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rz9l\" (UniqueName: \"kubernetes.io/projected/378ea53a-1006-4116-a56d-7c466c494224-kube-api-access-9rz9l\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.157701 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.157716 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.157727 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.157740 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.177404 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "378ea53a-1006-4116-a56d-7c466c494224" (UID: "378ea53a-1006-4116-a56d-7c466c494224"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.260646 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.390748 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-gpttb" event={"ID":"378ea53a-1006-4116-a56d-7c466c494224","Type":"ContainerDied","Data":"359ae3ad38c8aceae2d332d6b3825bb94840bbf169efcda9149246e76b81e498"} Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.390839 5010 scope.go:117] "RemoveContainer" containerID="00e55dbee70f472f8a93914d11cda4d852198236db1abda35bbcb237004b7327" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.391034 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.419762 5010 generic.go:334] "Generic (PLEG): container finished" podID="a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" containerID="c2c236cbcbee82d440a00402bffa84360077e085e5045869a24060dbc0c3411c" exitCode=0 Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.419926 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xlhhb" event={"ID":"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3","Type":"ContainerDied","Data":"c2c236cbcbee82d440a00402bffa84360077e085e5045869a24060dbc0c3411c"} Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.444732 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" event={"ID":"f7535aa4-5a5e-4663-b9c5-7822d0836660","Type":"ContainerStarted","Data":"54d52bbf972f2c68c46beb0620a95b30135d78a71e1e999b8b262f72fafa7a37"} Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.445394 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.518407 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" podStartSLOduration=6.518365582 podStartE2EDuration="6.518365582s" podCreationTimestamp="2026-02-03 10:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:24:39.49385574 +0000 UTC m=+1349.649831869" watchObservedRunningTime="2026-02-03 10:24:39.518365582 +0000 UTC m=+1349.674341711" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.615907 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gpttb"] Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.630404 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gpttb"] Feb 03 10:24:40 crc kubenswrapper[5010]: I0203 10:24:40.524442 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="378ea53a-1006-4116-a56d-7c466c494224" path="/var/lib/kubelet/pods/378ea53a-1006-4116-a56d-7c466c494224/volumes" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.156683 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xlhhb" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.237830 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-config-data\") pod \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.237985 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-db-sync-config-data\") pod \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.238113 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqxvx\" (UniqueName: \"kubernetes.io/projected/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-kube-api-access-nqxvx\") pod \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.238164 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-combined-ca-bundle\") pod \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.277586 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-kube-api-access-nqxvx" (OuterVolumeSpecName: "kube-api-access-nqxvx") pod "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" (UID: "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3"). InnerVolumeSpecName "kube-api-access-nqxvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.287491 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" (UID: "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.341133 5010 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.341195 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqxvx\" (UniqueName: \"kubernetes.io/projected/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-kube-api-access-nqxvx\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.387202 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" (UID: "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.393188 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-config-data" (OuterVolumeSpecName: "config-data") pod "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" (UID: "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.443012 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.443378 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.547972 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xlhhb" event={"ID":"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3","Type":"ContainerDied","Data":"46779b8951b31f9858ffd66ac6e32f691ea2a94f077b82226673a024b7efc699"} Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.548100 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xlhhb" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.548644 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46779b8951b31f9858ffd66ac6e32f691ea2a94f077b82226673a024b7efc699" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.125952 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-r249m"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.126491 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="dnsmasq-dns" containerID="cri-o://54d52bbf972f2c68c46beb0620a95b30135d78a71e1e999b8b262f72fafa7a37" gracePeriod=10 Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.186936 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4g4n5"] Feb 03 10:24:42 crc kubenswrapper[5010]: E0203 10:24:42.193041 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="378ea53a-1006-4116-a56d-7c466c494224" containerName="init" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.193138 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="378ea53a-1006-4116-a56d-7c466c494224" containerName="init" Feb 03 10:24:42 crc kubenswrapper[5010]: E0203 10:24:42.193175 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" containerName="glance-db-sync" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.193191 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" containerName="glance-db-sync" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.193695 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="378ea53a-1006-4116-a56d-7c466c494224" containerName="init" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.193726 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" 
containerName="glance-db-sync" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.201936 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.281784 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6548998769-npmxc"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.295002 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.295168 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.295809 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.296034 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.296082 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgjdv\" (UniqueName: \"kubernetes.io/projected/6195408a-292f-4e66-84a7-5007ba24c702-kube-api-access-bgjdv\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.296509 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-config\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.370020 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4g4n5"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.401799 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.402884 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.404444 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.404594 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.404626 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgjdv\" (UniqueName: \"kubernetes.io/projected/6195408a-292f-4e66-84a7-5007ba24c702-kube-api-access-bgjdv\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.404666 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-config\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.409451 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-config\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.411639 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.412504 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.413880 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.414633 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: 
\"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.446589 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7cdcd56868-k9h7g"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.454916 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.464358 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.512248 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-scripts\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.512372 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-config-data\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.512421 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-logs\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.512461 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-secret-key\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.512497 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-tls-certs\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.512542 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnlnb\" (UniqueName: \"kubernetes.io/projected/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-kube-api-access-mnlnb\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.512562 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-combined-ca-bundle\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.548766 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgjdv\" (UniqueName: 
\"kubernetes.io/projected/6195408a-292f-4e66-84a7-5007ba24c702-kube-api-access-bgjdv\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.578114 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cdcd56868-k9h7g"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.578181 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b5b4c5ff-x859r"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.582697 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.584993 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.594041 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.594438 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mtbjz" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.595177 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.597464 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.613725 5010 generic.go:334] "Generic (PLEG): container finished" podID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerID="54d52bbf972f2c68c46beb0620a95b30135d78a71e1e999b8b262f72fafa7a37" exitCode=0 Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.613814 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" event={"ID":"f7535aa4-5a5e-4663-b9c5-7822d0836660","Type":"ContainerDied","Data":"54d52bbf972f2c68c46beb0620a95b30135d78a71e1e999b8b262f72fafa7a37"} Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.614839 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-config-data\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.614912 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.614943 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615164 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-logs\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615241 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-logs\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615375 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-secret-key\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615459 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615502 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6776\" (UniqueName: \"kubernetes.io/projected/e731f56b-df87-43c2-9b58-dcb496df80c9-kube-api-access-q6776\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615529 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615666 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-tls-certs\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615719 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnlnb\" (UniqueName: \"kubernetes.io/projected/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-kube-api-access-mnlnb\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615742 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-combined-ca-bundle\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615809 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615833 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-scripts\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.620717 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-logs\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.623170 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-scripts\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.623235 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.625630 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-config-data\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.642047 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-combined-ca-bundle\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.642702 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-tls-certs\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.643578 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-secret-key\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.654383 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnlnb\" (UniqueName: \"kubernetes.io/projected/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-kube-api-access-mnlnb\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.665237 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cc988db4-2mpfb"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 
10:24:42.667113 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.708106 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cc988db4-2mpfb"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.718756 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-combined-ca-bundle\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.718844 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.718875 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2fedcc57-b16c-4177-a10e-f627269b4adb-config-data\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.718902 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719067 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-horizon-secret-key\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719092 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fedcc57-b16c-4177-a10e-f627269b4adb-scripts\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719124 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-logs\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719158 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719180 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-q6776\" (UniqueName: \"kubernetes.io/projected/e731f56b-df87-43c2-9b58-dcb496df80c9-kube-api-access-q6776\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719201 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719280 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-horizon-tls-certs\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719307 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719342 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6scb\" (UniqueName: \"kubernetes.io/projected/2fedcc57-b16c-4177-a10e-f627269b4adb-kube-api-access-t6scb\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719384 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fedcc57-b16c-4177-a10e-f627269b4adb-logs\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.721578 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.721944 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-logs\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.722689 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.725840 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.738759 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.740125 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.742881 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6776\" (UniqueName: \"kubernetes.io/projected/e731f56b-df87-43c2-9b58-dcb496df80c9-kube-api-access-q6776\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.790250 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.802739 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.803281 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.823740 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fedcc57-b16c-4177-a10e-f627269b4adb-logs\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.823891 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-combined-ca-bundle\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.823962 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2fedcc57-b16c-4177-a10e-f627269b4adb-config-data\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.823991 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-horizon-secret-key\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.824009 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fedcc57-b16c-4177-a10e-f627269b4adb-scripts\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.824138 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-horizon-tls-certs\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.824196 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6scb\" (UniqueName: \"kubernetes.io/projected/2fedcc57-b16c-4177-a10e-f627269b4adb-kube-api-access-t6scb\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.825266 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fedcc57-b16c-4177-a10e-f627269b4adb-logs\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.826561 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fedcc57-b16c-4177-a10e-f627269b4adb-scripts\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.828459 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/2fedcc57-b16c-4177-a10e-f627269b4adb-config-data\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.833093 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-horizon-secret-key\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.833378 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-combined-ca-bundle\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.839583 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-horizon-tls-certs\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.847069 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6scb\" (UniqueName: \"kubernetes.io/projected/2fedcc57-b16c-4177-a10e-f627269b4adb-kube-api-access-t6scb\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.123019 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.562360 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.564583 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.578043 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.586801 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.652089 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.652195 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.652309 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-logs\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.652419 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.652679 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhwkv\" (UniqueName: \"kubernetes.io/projected/c01a7e05-aa67-4606-9a08-c7a91dd9b332-kube-api-access-qhwkv\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.652784 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.652867 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.754359 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.754434 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.754486 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-logs\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.754541 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.754585 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhwkv\" (UniqueName: \"kubernetes.io/projected/c01a7e05-aa67-4606-9a08-c7a91dd9b332-kube-api-access-qhwkv\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.754614 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.754648 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.755127 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-logs\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.755833 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.755897 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc 
kubenswrapper[5010]: I0203 10:24:43.770299 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.771026 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.775158 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.777739 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhwkv\" (UniqueName: \"kubernetes.io/projected/c01a7e05-aa67-4606-9a08-c7a91dd9b332-kube-api-access-qhwkv\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.872049 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.897091 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 10:24:45 crc kubenswrapper[5010]: I0203 10:24:45.171971 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Feb 03 10:24:45 crc kubenswrapper[5010]: I0203 10:24:45.693339 5010 generic.go:334] "Generic (PLEG): container finished" podID="1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" containerID="284a769b3c25b0cdea9e5ddf661cc8aed190c024694193ebf7516c57518d0765" exitCode=0 Feb 03 10:24:45 crc kubenswrapper[5010]: I0203 10:24:45.693425 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7w6tr" event={"ID":"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba","Type":"ContainerDied","Data":"284a769b3c25b0cdea9e5ddf661cc8aed190c024694193ebf7516c57518d0765"} Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.390293 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.390370 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.390424 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.391495 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"feb6be59c5f60eb4fb5b49379a30e3d1c2e1212fd73c563908d470b35420da88"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.391569 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://feb6be59c5f60eb4fb5b49379a30e3d1c2e1212fd73c563908d470b35420da88" gracePeriod=600 Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.500078 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.587849 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.707602 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="feb6be59c5f60eb4fb5b49379a30e3d1c2e1212fd73c563908d470b35420da88" exitCode=0 Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.707839 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"feb6be59c5f60eb4fb5b49379a30e3d1c2e1212fd73c563908d470b35420da88"} Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.707878 5010 scope.go:117] "RemoveContainer" containerID="221f195b125299df734f26b3fd40fd966d81cfff3c339b70c815feda6a5e1f4b" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.574772 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.625408 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-credential-keys\") pod \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.625582 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-config-data\") pod \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.625636 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-scripts\") pod \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.625760 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-fernet-keys\") pod \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.625868 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-combined-ca-bundle\") pod \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.625948 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw5x7\" (UniqueName: \"kubernetes.io/projected/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-kube-api-access-lw5x7\") pod \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.634553 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-scripts" (OuterVolumeSpecName: "scripts") pod "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" (UID: "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.634588 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" (UID: "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.638168 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-kube-api-access-lw5x7" (OuterVolumeSpecName: "kube-api-access-lw5x7") pod "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" (UID: "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba"). InnerVolumeSpecName "kube-api-access-lw5x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.649302 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" (UID: "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.655690 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-config-data" (OuterVolumeSpecName: "config-data") pod "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" (UID: "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.668692 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" (UID: "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.728736 5010 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.728990 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.729059 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.729116 5010 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.729172 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.729256 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw5x7\" (UniqueName: \"kubernetes.io/projected/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-kube-api-access-lw5x7\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.793761 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-7w6tr" event={"ID":"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba","Type":"ContainerDied","Data":"d2dbdaf7c4fb793e606130a48124449992f37d61583b140dcfaf7dbb8bb3f1d2"} Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.794132 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2dbdaf7c4fb793e606130a48124449992f37d61583b140dcfaf7dbb8bb3f1d2" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.793798 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.677766 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7w6tr"] Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.691159 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7w6tr"] Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.774501 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-swx9t"] Feb 03 10:24:54 crc kubenswrapper[5010]: E0203 10:24:54.775289 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" containerName="keystone-bootstrap" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.775308 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" containerName="keystone-bootstrap" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.775520 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" containerName="keystone-bootstrap" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.776230 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.779515 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.779525 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.779696 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xdhtt" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.779830 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.780092 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.793480 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-swx9t"] Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.847468 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-config-data\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.847579 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-fernet-keys\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.847648 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-combined-ca-bundle\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.847721 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk8xc\" (UniqueName: \"kubernetes.io/projected/457510b3-7c5a-456d-9df3-54fa7dee8c4b-kube-api-access-jk8xc\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.847750 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-scripts\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.847774 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-credential-keys\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.958307 5010 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-combined-ca-bundle\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.958408 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk8xc\" (UniqueName: \"kubernetes.io/projected/457510b3-7c5a-456d-9df3-54fa7dee8c4b-kube-api-access-jk8xc\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.958445 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-scripts\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.958475 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-credential-keys\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.958504 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-config-data\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.958562 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-fernet-keys\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.962941 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-scripts\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.963329 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-combined-ca-bundle\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.964492 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-credential-keys\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.976230 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-fernet-keys\") pod \"keystone-bootstrap-swx9t\" (UID: 
\"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.978735 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk8xc\" (UniqueName: \"kubernetes.io/projected/457510b3-7c5a-456d-9df3-54fa7dee8c4b-kube-api-access-jk8xc\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.981499 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-config-data\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:55 crc kubenswrapper[5010]: I0203 10:24:55.097002 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:55 crc kubenswrapper[5010]: I0203 10:24:55.168114 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: i/o timeout" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.745428 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.745615 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wcm2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-tptfc_openstack(29ef610c-3c09-4b27-9b97-3a5350388caa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.747605 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-tptfc" podUID="29ef610c-3c09-4b27-9b97-3a5350388caa" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.784120 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.784384 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb7h566h8bh56bh5d8h594h5bh58fh4h5b8h8dh9h6dhb6h98h5fdh8chb6hdch688h5b6h5c7hcbh5f6h64fhd5h5f7h686h4h59hcfh597q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qxtzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-57c9d98597-wmwqg_openstack(7f771bc6-23e3-4382-89ea-f773805f789c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.787255 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-57c9d98597-wmwqg" podUID="7f771bc6-23e3-4382-89ea-f773805f789c" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.796519 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.796748 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n696h5ddh57dh5fbhfchc7h685h57h66h66ch5bdh698h65bh5c8h5bdh56h597h697h654h66fhb4h557h6fh575h57ch56fhfh594h6fh8ch65h587q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr86z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6548998769-npmxc_openstack(2f7faa93-7520-4d4b-b153-ed311effd90b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.799782 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6548998769-npmxc" podUID="2f7faa93-7520-4d4b-b153-ed311effd90b" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.819473 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-tptfc" podUID="29ef610c-3c09-4b27-9b97-3a5350388caa" Feb 03 10:24:56 crc kubenswrapper[5010]: I0203 10:24:56.516320 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" path="/var/lib/kubelet/pods/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba/volumes" Feb 03 10:24:58 crc kubenswrapper[5010]: E0203 10:24:58.145541 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 03 10:24:58 crc kubenswrapper[5010]: E0203 10:24:58.146073 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n694h95h78h5d4h558h554h7ch96h589h5ddh545hbch57fh5f7hdfhc6h656h5f8h8fh658h68bh589h5c9h4h577h5cbh5cfh5fh545h68h66bh59q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4rmrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4338eb03-3ad6-4d68-8d8a-a37694aff6d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.238345 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.419200 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hhmq\" (UniqueName: \"kubernetes.io/projected/f7535aa4-5a5e-4663-b9c5-7822d0836660-kube-api-access-4hhmq\") pod \"f7535aa4-5a5e-4663-b9c5-7822d0836660\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.419294 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-sb\") pod \"f7535aa4-5a5e-4663-b9c5-7822d0836660\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.419360 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-svc\") pod \"f7535aa4-5a5e-4663-b9c5-7822d0836660\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.419394 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-swift-storage-0\") pod \"f7535aa4-5a5e-4663-b9c5-7822d0836660\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.419417 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-config\") pod \"f7535aa4-5a5e-4663-b9c5-7822d0836660\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.419526 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-nb\") pod \"f7535aa4-5a5e-4663-b9c5-7822d0836660\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.429666 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7535aa4-5a5e-4663-b9c5-7822d0836660-kube-api-access-4hhmq" (OuterVolumeSpecName: "kube-api-access-4hhmq") pod "f7535aa4-5a5e-4663-b9c5-7822d0836660" (UID: "f7535aa4-5a5e-4663-b9c5-7822d0836660"). InnerVolumeSpecName "kube-api-access-4hhmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.469111 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f7535aa4-5a5e-4663-b9c5-7822d0836660" (UID: "f7535aa4-5a5e-4663-b9c5-7822d0836660"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.469780 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f7535aa4-5a5e-4663-b9c5-7822d0836660" (UID: "f7535aa4-5a5e-4663-b9c5-7822d0836660"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.469906 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f7535aa4-5a5e-4663-b9c5-7822d0836660" (UID: "f7535aa4-5a5e-4663-b9c5-7822d0836660"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.472778 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-config" (OuterVolumeSpecName: "config") pod "f7535aa4-5a5e-4663-b9c5-7822d0836660" (UID: "f7535aa4-5a5e-4663-b9c5-7822d0836660"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.497052 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f7535aa4-5a5e-4663-b9c5-7822d0836660" (UID: "f7535aa4-5a5e-4663-b9c5-7822d0836660"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.521605 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.521642 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.521653 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.521662 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.521675 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.521687 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hhmq\" (UniqueName: \"kubernetes.io/projected/f7535aa4-5a5e-4663-b9c5-7822d0836660-kube-api-access-4hhmq\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.861627 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" event={"ID":"f7535aa4-5a5e-4663-b9c5-7822d0836660","Type":"ContainerDied","Data":"9c2ae9a172420144ce552f204613ad111ecce479d2e000586e38710bc90ab902"} Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.861694 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.888173 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-r249m"] Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.898452 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-r249m"] Feb 03 10:25:00 crc kubenswrapper[5010]: I0203 10:25:00.168447 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: i/o timeout" Feb 03 10:25:00 crc kubenswrapper[5010]: I0203 10:25:00.513645 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" path="/var/lib/kubelet/pods/f7535aa4-5a5e-4663-b9c5-7822d0836660/volumes" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.762029 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6548998769-npmxc" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.772669 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892578 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-config-data\") pod \"7f771bc6-23e3-4382-89ea-f773805f789c\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892620 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr86z\" (UniqueName: \"kubernetes.io/projected/2f7faa93-7520-4d4b-b153-ed311effd90b-kube-api-access-cr86z\") pod \"2f7faa93-7520-4d4b-b153-ed311effd90b\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892669 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f7faa93-7520-4d4b-b153-ed311effd90b-horizon-secret-key\") pod \"2f7faa93-7520-4d4b-b153-ed311effd90b\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892696 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-config-data\") pod \"2f7faa93-7520-4d4b-b153-ed311effd90b\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892794 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-scripts\") pod \"7f771bc6-23e3-4382-89ea-f773805f789c\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892842 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f7faa93-7520-4d4b-b153-ed311effd90b-logs\") pod \"2f7faa93-7520-4d4b-b153-ed311effd90b\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892881 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-scripts\") pod \"2f7faa93-7520-4d4b-b153-ed311effd90b\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892950 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxtzm\" (UniqueName: \"kubernetes.io/projected/7f771bc6-23e3-4382-89ea-f773805f789c-kube-api-access-qxtzm\") pod \"7f771bc6-23e3-4382-89ea-f773805f789c\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892977 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f771bc6-23e3-4382-89ea-f773805f789c-horizon-secret-key\") pod \"7f771bc6-23e3-4382-89ea-f773805f789c\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892998 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f771bc6-23e3-4382-89ea-f773805f789c-logs\") pod \"7f771bc6-23e3-4382-89ea-f773805f789c\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.893710 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-scripts" (OuterVolumeSpecName: "scripts") pod "7f771bc6-23e3-4382-89ea-f773805f789c" (UID: "7f771bc6-23e3-4382-89ea-f773805f789c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.893932 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-config-data" (OuterVolumeSpecName: "config-data") pod "7f771bc6-23e3-4382-89ea-f773805f789c" (UID: "7f771bc6-23e3-4382-89ea-f773805f789c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.894392 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f7faa93-7520-4d4b-b153-ed311effd90b-logs" (OuterVolumeSpecName: "logs") pod "2f7faa93-7520-4d4b-b153-ed311effd90b" (UID: "2f7faa93-7520-4d4b-b153-ed311effd90b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.894438 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-scripts" (OuterVolumeSpecName: "scripts") pod "2f7faa93-7520-4d4b-b153-ed311effd90b" (UID: "2f7faa93-7520-4d4b-b153-ed311effd90b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.894906 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f771bc6-23e3-4382-89ea-f773805f789c-logs" (OuterVolumeSpecName: "logs") pod "7f771bc6-23e3-4382-89ea-f773805f789c" (UID: "7f771bc6-23e3-4382-89ea-f773805f789c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.894925 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-config-data" (OuterVolumeSpecName: "config-data") pod "2f7faa93-7520-4d4b-b153-ed311effd90b" (UID: "2f7faa93-7520-4d4b-b153-ed311effd90b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.901409 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f771bc6-23e3-4382-89ea-f773805f789c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7f771bc6-23e3-4382-89ea-f773805f789c" (UID: "7f771bc6-23e3-4382-89ea-f773805f789c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.902086 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f771bc6-23e3-4382-89ea-f773805f789c-kube-api-access-qxtzm" (OuterVolumeSpecName: "kube-api-access-qxtzm") pod "7f771bc6-23e3-4382-89ea-f773805f789c" (UID: "7f771bc6-23e3-4382-89ea-f773805f789c"). InnerVolumeSpecName "kube-api-access-qxtzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.902166 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7faa93-7520-4d4b-b153-ed311effd90b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2f7faa93-7520-4d4b-b153-ed311effd90b" (UID: "2f7faa93-7520-4d4b-b153-ed311effd90b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.904620 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f7faa93-7520-4d4b-b153-ed311effd90b-kube-api-access-cr86z" (OuterVolumeSpecName: "kube-api-access-cr86z") pod "2f7faa93-7520-4d4b-b153-ed311effd90b" (UID: "2f7faa93-7520-4d4b-b153-ed311effd90b"). InnerVolumeSpecName "kube-api-access-cr86z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.932929 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6548998769-npmxc" event={"ID":"2f7faa93-7520-4d4b-b153-ed311effd90b","Type":"ContainerDied","Data":"b292b07f4a535a045b80c60269a48c9544e180d091d0068c00e312baf2b8ddb0"} Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.933017 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6548998769-npmxc" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.938997 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57c9d98597-wmwqg" event={"ID":"7f771bc6-23e3-4382-89ea-f773805f789c","Type":"ContainerDied","Data":"96801178c0f60b1be70f5a00384d47d9cf626976ce906ad24548febe89fb7fc8"} Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.939037 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000183 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000330 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000344 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f7faa93-7520-4d4b-b153-ed311effd90b-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000356 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000367 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxtzm\" (UniqueName: \"kubernetes.io/projected/7f771bc6-23e3-4382-89ea-f773805f789c-kube-api-access-qxtzm\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000384 5010 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f771bc6-23e3-4382-89ea-f773805f789c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000398 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f771bc6-23e3-4382-89ea-f773805f789c-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000410 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000423 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr86z\" (UniqueName: \"kubernetes.io/projected/2f7faa93-7520-4d4b-b153-ed311effd90b-kube-api-access-cr86z\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000434 5010 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f7faa93-7520-4d4b-b153-ed311effd90b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.089396 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6548998769-npmxc"] Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.098419 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6548998769-npmxc"] Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.118203 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57c9d98597-wmwqg"] Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.125853 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-57c9d98597-wmwqg"] Feb 03 10:25:07 crc kubenswrapper[5010]: E0203 10:25:07.378349 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 03 10:25:07 crc kubenswrapper[5010]: E0203 10:25:07.378512 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6l7tp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-g6tdx_openstack(bad34e68-b20a-486c-b06b-e19f5aaaf917): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:25:07 crc kubenswrapper[5010]: E0203 10:25:07.379764 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-g6tdx" podUID="bad34e68-b20a-486c-b06b-e19f5aaaf917" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.961194 5010 generic.go:334] "Generic (PLEG): container finished" podID="5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" containerID="2f477c6764bb977e8cc3e17e43a92a85fa737e9bdd4ffa07901f030c855e03b4" exitCode=0 Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.961251 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mvrf4" event={"ID":"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c","Type":"ContainerDied","Data":"2f477c6764bb977e8cc3e17e43a92a85fa737e9bdd4ffa07901f030c855e03b4"} Feb 03 10:25:07 crc kubenswrapper[5010]: E0203 10:25:07.965248 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-g6tdx" podUID="bad34e68-b20a-486c-b06b-e19f5aaaf917" Feb 03 10:25:08 crc kubenswrapper[5010]: I0203 10:25:08.414145 5010 scope.go:117] "RemoveContainer" containerID="54d52bbf972f2c68c46beb0620a95b30135d78a71e1e999b8b262f72fafa7a37" Feb 03 10:25:08 crc 
kubenswrapper[5010]: E0203 10:25:08.443542 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 03 10:25:08 crc kubenswrapper[5010]: E0203 10:25:08.444195 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f846k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-b9wwp_openstack(1acc33e7-f3ae-4131-a003-aa6b592269c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:25:08 crc kubenswrapper[5010]: E0203 10:25:08.445593 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-b9wwp" podUID="1acc33e7-f3ae-4131-a003-aa6b592269c6" Feb 03 10:25:08 crc kubenswrapper[5010]: I0203 10:25:08.519488 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f7faa93-7520-4d4b-b153-ed311effd90b" path="/var/lib/kubelet/pods/2f7faa93-7520-4d4b-b153-ed311effd90b/volumes" Feb 03 
10:25:08 crc kubenswrapper[5010]: I0203 10:25:08.520342 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f771bc6-23e3-4382-89ea-f773805f789c" path="/var/lib/kubelet/pods/7f771bc6-23e3-4382-89ea-f773805f789c/volumes" Feb 03 10:25:08 crc kubenswrapper[5010]: I0203 10:25:08.857774 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cdcd56868-k9h7g"] Feb 03 10:25:08 crc kubenswrapper[5010]: I0203 10:25:08.897325 5010 scope.go:117] "RemoveContainer" containerID="86940200a0f167ad56e8101970695c50456840462697eef05dc72062b5c839d7" Feb 03 10:25:08 crc kubenswrapper[5010]: W0203 10:25:08.900201 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e6ce46b_7ed7_48c5_a09c_cb39ec7bf34b.slice/crio-df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede WatchSource:0}: Error finding container df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede: Status 404 returned error can't find the container with id df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.012260 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerStarted","Data":"df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede"} Feb 03 10:25:09 crc kubenswrapper[5010]: E0203 10:25:09.061544 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-b9wwp" podUID="1acc33e7-f3ae-4131-a003-aa6b592269c6" Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.070566 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:25:09 crc kubenswrapper[5010]: W0203 10:25:09.327757 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fedcc57_b16c_4177_a10e_f627269b4adb.slice/crio-76388283145b5851ac3db3834097f01fb292268a133c5db4f83b3ead8c57274d WatchSource:0}: Error finding container 76388283145b5851ac3db3834097f01fb292268a133c5db4f83b3ead8c57274d: Status 404 returned error can't find the container with id 76388283145b5851ac3db3834097f01fb292268a133c5db4f83b3ead8c57274d Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.333252 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cc988db4-2mpfb"] Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.443775 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.560961 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdlkh\" (UniqueName: \"kubernetes.io/projected/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-kube-api-access-tdlkh\") pod \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.561056 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-combined-ca-bundle\") pod \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.561148 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-config\") pod \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.581567 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-kube-api-access-tdlkh" (OuterVolumeSpecName: "kube-api-access-tdlkh") pod "5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" (UID: "5c2a4fab-65d6-47ac-9829-2b5b5e8d412c"). InnerVolumeSpecName "kube-api-access-tdlkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.654381 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.663351 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdlkh\" (UniqueName: \"kubernetes.io/projected/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-kube-api-access-tdlkh\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.665916 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-swx9t"] Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.688798 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4g4n5"] Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.717193 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-config" (OuterVolumeSpecName: "config") pod "5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" (UID: "5c2a4fab-65d6-47ac-9829-2b5b5e8d412c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.767322 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.896974 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" (UID: "5c2a4fab-65d6-47ac-9829-2b5b5e8d412c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.970740 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.043853 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4338eb03-3ad6-4d68-8d8a-a37694aff6d7","Type":"ContainerStarted","Data":"d91d141426317acd31c21e9040c1e38df0008cc513ccacd6d4ecf8718788f6f7"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.053461 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerStarted","Data":"2cc2ce22d6ea86e28f6eb264d0d9c9e725b7685d6ab0fd02531064a6b9b028b0"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.053522 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerStarted","Data":"d39b7b37971eb5d63b6cabefb740041e4cc9cc6265fc84bc4b6ff52605291d6a"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.066177 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c01a7e05-aa67-4606-9a08-c7a91dd9b332","Type":"ContainerStarted","Data":"d4d81e3a7705c11b3d4b432eac5a8a598f0ea28d2b2cfb774c5c3a7b63578142"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.069585 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" event={"ID":"6195408a-292f-4e66-84a7-5007ba24c702","Type":"ContainerStarted","Data":"e28ff655fe84bd57493957f3f09a3080ab17c5d462a3b8177036f3153667da0d"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.084369 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7cdcd56868-k9h7g" podStartSLOduration=28.084350373 podStartE2EDuration="28.084350373s" podCreationTimestamp="2026-02-03 10:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:10.079523679 +0000 UTC m=+1380.235499808" watchObservedRunningTime="2026-02-03 10:25:10.084350373 +0000 UTC m=+1380.240326502" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.091504 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e731f56b-df87-43c2-9b58-dcb496df80c9","Type":"ContainerStarted","Data":"09d80471a02be8b08b6c00cb53adbc75820f62dbcbe1bed30472a593dcfe57cb"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.166698 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988db4-2mpfb" event={"ID":"2fedcc57-b16c-4177-a10e-f627269b4adb","Type":"ContainerStarted","Data":"1d7ecd8900f582370f2aa2ea7d17e98fbb53211402ee75abd7707475bb689f68"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.167366 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988db4-2mpfb" event={"ID":"2fedcc57-b16c-4177-a10e-f627269b4adb","Type":"ContainerStarted","Data":"76388283145b5851ac3db3834097f01fb292268a133c5db4f83b3ead8c57274d"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.200510 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-swx9t" 
event={"ID":"457510b3-7c5a-456d-9df3-54fa7dee8c4b","Type":"ContainerStarted","Data":"9bb617f937270e1fe6e444469ff83627ed35fc24df5672358eff75f2893f7693"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.268020 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5b4c5ff-x859r" event={"ID":"716318b2-6f04-4ff9-94c2-e107ebf51cb6","Type":"ContainerStarted","Data":"1e0c0b172a23175ded34e25aee553cea1577eb12ecd614b67b01f55633483ef4"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.268111 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5b4c5ff-x859r" event={"ID":"716318b2-6f04-4ff9-94c2-e107ebf51cb6","Type":"ContainerStarted","Data":"5ec57a7e44cc0f82c124057f7268cf9e4686f96d4ca8ba657715ac39cccda8e4"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.268369 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b5b4c5ff-x859r" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon-log" containerID="cri-o://5ec57a7e44cc0f82c124057f7268cf9e4686f96d4ca8ba657715ac39cccda8e4" gracePeriod=30 Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.269203 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b5b4c5ff-x859r" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon" containerID="cri-o://1e0c0b172a23175ded34e25aee553cea1577eb12ecd614b67b01f55633483ef4" gracePeriod=30 Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.302633 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mvrf4" event={"ID":"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c","Type":"ContainerDied","Data":"2b0073ad8287411e1d59389e4452039e032d8e37832a1112a2e60a18196d8ae0"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.306887 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.313400 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b0073ad8287411e1d59389e4452039e032d8e37832a1112a2e60a18196d8ae0" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.353127 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.387573 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b5b4c5ff-x859r" podStartSLOduration=4.422412738 podStartE2EDuration="34.38754674s" podCreationTimestamp="2026-02-03 10:24:36 +0000 UTC" firstStartedPulling="2026-02-03 10:24:37.415592094 +0000 UTC m=+1347.571568223" lastFinishedPulling="2026-02-03 10:25:07.380726096 +0000 UTC m=+1377.536702225" observedRunningTime="2026-02-03 10:25:10.320354867 +0000 UTC m=+1380.476331016" watchObservedRunningTime="2026-02-03 10:25:10.38754674 +0000 UTC m=+1380.543522869" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.390978 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4g4n5"] Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.426256 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-867995856-hbnv9"] Feb 03 10:25:10 crc kubenswrapper[5010]: E0203 10:25:10.426700 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="init" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.426714 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="init" Feb 03 10:25:10 crc kubenswrapper[5010]: E0203 10:25:10.426726 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="dnsmasq-dns" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.426733 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="dnsmasq-dns" Feb 03 10:25:10 crc kubenswrapper[5010]: E0203 10:25:10.426751 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" containerName="neutron-db-sync" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.426758 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" containerName="neutron-db-sync" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.426939 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="dnsmasq-dns" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.426961 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" containerName="neutron-db-sync" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.429057 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.434069 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.434181 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.434800 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-867995856-hbnv9"] Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.437374 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.437387 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-j789z" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.496145 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-config\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.496307 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-httpd-config\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.496440 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-combined-ca-bundle\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.496500 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkvkc\" (UniqueName: \"kubernetes.io/projected/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-kube-api-access-mkvkc\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.496575 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-ovndb-tls-certs\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.499141 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-v4m78"] Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.501790 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: E0203 10:25:10.580114 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c2a4fab_65d6_47ac_9829_2b5b5e8d412c.slice/crio-2b0073ad8287411e1d59389e4452039e032d8e37832a1112a2e60a18196d8ae0\": RecentStats: unable to find data in memory cache]" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598077 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-combined-ca-bundle\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598255 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkvkc\" (UniqueName: \"kubernetes.io/projected/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-kube-api-access-mkvkc\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598365 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598471 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598658 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598755 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-ovndb-tls-certs\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598834 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-config\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598965 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-config\") pod \"neutron-867995856-hbnv9\" (UID: 
\"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.599143 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-httpd-config\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.599485 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-svc\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.599645 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54blj\" (UniqueName: \"kubernetes.io/projected/800c4356-da72-47c4-9a83-5eeceacc7211-kube-api-access-54blj\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.605548 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.605925 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.621225 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-combined-ca-bundle\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.629763 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-ovndb-tls-certs\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.653786 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-httpd-config\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.654338 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkvkc\" (UniqueName: \"kubernetes.io/projected/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-kube-api-access-mkvkc\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.655908 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-v4m78"] Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.656016 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-config\") pod \"neutron-867995856-hbnv9\" (UID: 
\"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.740667 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.741532 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.741834 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-config\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.742337 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-svc\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.742391 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54blj\" (UniqueName: \"kubernetes.io/projected/800c4356-da72-47c4-9a83-5eeceacc7211-kube-api-access-54blj\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.742755 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.745830 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.748078 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.748618 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " 
pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.752703 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-svc\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.766154 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-config\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.820842 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54blj\" (UniqueName: \"kubernetes.io/projected/800c4356-da72-47c4-9a83-5eeceacc7211-kube-api-access-54blj\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.933426 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.973037 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.386773 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tptfc" event={"ID":"29ef610c-3c09-4b27-9b97-3a5350388caa","Type":"ContainerStarted","Data":"9f5dffa42b9c5fba57b57a1ca0e358ff317d50df295683f9bc9e42abb84b1b81"} Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.410842 5010 generic.go:334] "Generic (PLEG): container finished" podID="6195408a-292f-4e66-84a7-5007ba24c702" containerID="379ab01e67ed33eb16a52d733d3fa47b3bc67d903a473cca21c2a2fbf2a80135" exitCode=0 Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.410932 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" event={"ID":"6195408a-292f-4e66-84a7-5007ba24c702","Type":"ContainerDied","Data":"379ab01e67ed33eb16a52d733d3fa47b3bc67d903a473cca21c2a2fbf2a80135"} Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.411549 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-tptfc" podStartSLOduration=5.38555528 podStartE2EDuration="38.411532018s" podCreationTimestamp="2026-02-03 10:24:33 +0000 UTC" firstStartedPulling="2026-02-03 10:24:37.143292014 +0000 UTC m=+1347.299268143" lastFinishedPulling="2026-02-03 10:25:10.169268752 +0000 UTC m=+1380.325244881" observedRunningTime="2026-02-03 10:25:11.407138614 +0000 UTC m=+1381.563114753" watchObservedRunningTime="2026-02-03 10:25:11.411532018 +0000 UTC m=+1381.567508237" Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.432071 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e731f56b-df87-43c2-9b58-dcb496df80c9","Type":"ContainerStarted","Data":"8a5453edee79c0d75e7ddeabeb025c5dee661893de0985e382bb10724d267f76"} Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.440069 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988db4-2mpfb" 
event={"ID":"2fedcc57-b16c-4177-a10e-f627269b4adb","Type":"ContainerStarted","Data":"45c56002ab101b0e77fc5934aa412e9d50c3e636af770ec4fe10888a673e7f7e"} Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.482109 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cc988db4-2mpfb" podStartSLOduration=29.482086406 podStartE2EDuration="29.482086406s" podCreationTimestamp="2026-02-03 10:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:11.470932869 +0000 UTC m=+1381.626909018" watchObservedRunningTime="2026-02-03 10:25:11.482086406 +0000 UTC m=+1381.638062545" Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.488232 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-swx9t" event={"ID":"457510b3-7c5a-456d-9df3-54fa7dee8c4b","Type":"ContainerStarted","Data":"eec510d597d8f2314ae76e8de6136bb5224447e6e83068a025a8dfed4080a04f"} Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.519041 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-swx9t" podStartSLOduration=17.519021829 podStartE2EDuration="17.519021829s" podCreationTimestamp="2026-02-03 10:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:11.517260613 +0000 UTC m=+1381.673236762" watchObservedRunningTime="2026-02-03 10:25:11.519021829 +0000 UTC m=+1381.674997958" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.092270 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-v4m78"] Feb 03 10:25:12 crc kubenswrapper[5010]: W0203 10:25:12.212790 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod800c4356_da72_47c4_9a83_5eeceacc7211.slice/crio-a39cc9b17b280be33534b557e14c9c1d9f99cb76acef07ae259bc5d74339aa49 WatchSource:0}: Error finding container a39cc9b17b280be33534b557e14c9c1d9f99cb76acef07ae259bc5d74339aa49: Status 404 returned error can't find the container with id a39cc9b17b280be33534b557e14c9c1d9f99cb76acef07ae259bc5d74339aa49 Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.272591 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-867995856-hbnv9"] Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.493955 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.575539 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" event={"ID":"800c4356-da72-47c4-9a83-5eeceacc7211","Type":"ContainerStarted","Data":"a39cc9b17b280be33534b557e14c9c1d9f99cb76acef07ae259bc5d74339aa49"} Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.578160 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" event={"ID":"6195408a-292f-4e66-84a7-5007ba24c702","Type":"ContainerDied","Data":"e28ff655fe84bd57493957f3f09a3080ab17c5d462a3b8177036f3153667da0d"} Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.578231 5010 scope.go:117] "RemoveContainer" containerID="379ab01e67ed33eb16a52d733d3fa47b3bc67d903a473cca21c2a2fbf2a80135" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.578382 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.594998 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgjdv\" (UniqueName: \"kubernetes.io/projected/6195408a-292f-4e66-84a7-5007ba24c702-kube-api-access-bgjdv\") pod \"6195408a-292f-4e66-84a7-5007ba24c702\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.595052 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-config\") pod \"6195408a-292f-4e66-84a7-5007ba24c702\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.595089 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-svc\") pod \"6195408a-292f-4e66-84a7-5007ba24c702\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.595140 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-nb\") pod \"6195408a-292f-4e66-84a7-5007ba24c702\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.595226 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-swift-storage-0\") pod \"6195408a-292f-4e66-84a7-5007ba24c702\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.595266 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-sb\") pod \"6195408a-292f-4e66-84a7-5007ba24c702\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.597197 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867995856-hbnv9" event={"ID":"ec3f26b1-ee88-47b4-80d5-f281aa85c00d","Type":"ContainerStarted","Data":"5d57a17f6b627eededa0a21aa0ef2051ab13fadb63e9a5ef111d5cb1f8d96193"} Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 
10:25:12.611952 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c01a7e05-aa67-4606-9a08-c7a91dd9b332","Type":"ContainerStarted","Data":"6700db575ba245cd84da8dd0d6b288edc79eb5817a450848a4a630c96ccb0a97"} Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.769176 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6195408a-292f-4e66-84a7-5007ba24c702-kube-api-access-bgjdv" (OuterVolumeSpecName: "kube-api-access-bgjdv") pod "6195408a-292f-4e66-84a7-5007ba24c702" (UID: "6195408a-292f-4e66-84a7-5007ba24c702"). InnerVolumeSpecName "kube-api-access-bgjdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.805741 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.807063 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.838281 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6195408a-292f-4e66-84a7-5007ba24c702" (UID: "6195408a-292f-4e66-84a7-5007ba24c702"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.840777 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6195408a-292f-4e66-84a7-5007ba24c702" (UID: "6195408a-292f-4e66-84a7-5007ba24c702"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.871016 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.873969 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgjdv\" (UniqueName: \"kubernetes.io/projected/6195408a-292f-4e66-84a7-5007ba24c702-kube-api-access-bgjdv\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.874008 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.906472 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6195408a-292f-4e66-84a7-5007ba24c702" (UID: "6195408a-292f-4e66-84a7-5007ba24c702"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.914785 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6195408a-292f-4e66-84a7-5007ba24c702" (UID: "6195408a-292f-4e66-84a7-5007ba24c702"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.976147 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.976176 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.053248 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-config" (OuterVolumeSpecName: "config") pod "6195408a-292f-4e66-84a7-5007ba24c702" (UID: "6195408a-292f-4e66-84a7-5007ba24c702"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.079748 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.124939 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.124997 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.220954 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-58c5b6f6cc-94dq7"] Feb 03 10:25:13 crc kubenswrapper[5010]: E0203 10:25:13.222164 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6195408a-292f-4e66-84a7-5007ba24c702" containerName="init" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.222184 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="6195408a-292f-4e66-84a7-5007ba24c702" containerName="init" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.222761 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="6195408a-292f-4e66-84a7-5007ba24c702" containerName="init" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.224627 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.248483 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.248747 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.267788 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58c5b6f6cc-94dq7"] Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.298036 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4g4n5"] Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.319208 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4g4n5"] Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.388863 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-httpd-config\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.388980 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-config\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.389017 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-combined-ca-bundle\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.389088 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-ovndb-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.389128 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-public-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.389163 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnx67\" (UniqueName: \"kubernetes.io/projected/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-kube-api-access-bnx67\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.389256 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-internal-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.491943 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-httpd-config\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.492031 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-config\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.492056 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-combined-ca-bundle\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.492102 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-ovndb-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.492126 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-public-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.492151 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnx67\" (UniqueName: \"kubernetes.io/projected/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-kube-api-access-bnx67\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.492198 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-internal-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.498017 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-httpd-config\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.498578 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-ovndb-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: 
\"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.500305 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-public-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.503799 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-internal-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.509714 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-combined-ca-bundle\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.514472 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-config\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.521947 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnx67\" (UniqueName: \"kubernetes.io/projected/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-kube-api-access-bnx67\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.610991 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.523188 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6195408a-292f-4e66-84a7-5007ba24c702" path="/var/lib/kubelet/pods/6195408a-292f-4e66-84a7-5007ba24c702/volumes" Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.649283 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58c5b6f6cc-94dq7"] Feb 03 10:25:14 crc kubenswrapper[5010]: W0203 10:25:14.656290 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31521b0f_9e4f_4cfc_b0e8_e9e2bd2ca688.slice/crio-b27f611dc82e161f85b167c99dbce2d08eedaac7c3dd33e70725328f6c7d0a68 WatchSource:0}: Error finding container b27f611dc82e161f85b167c99dbce2d08eedaac7c3dd33e70725328f6c7d0a68: Status 404 returned error can't find the container with id b27f611dc82e161f85b167c99dbce2d08eedaac7c3dd33e70725328f6c7d0a68 Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.663375 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c01a7e05-aa67-4606-9a08-c7a91dd9b332","Type":"ContainerStarted","Data":"04f1ed0eb618ead4dfd5e192e6cbd45c7a42c68a8906bfc9878f7864e6544b0e"} Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.663558 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-log" containerID="cri-o://6700db575ba245cd84da8dd0d6b288edc79eb5817a450848a4a630c96ccb0a97" gracePeriod=30 Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.667138 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-httpd" containerID="cri-o://04f1ed0eb618ead4dfd5e192e6cbd45c7a42c68a8906bfc9878f7864e6544b0e" gracePeriod=30 Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.727456 5010 generic.go:334] "Generic (PLEG): container finished" podID="800c4356-da72-47c4-9a83-5eeceacc7211" containerID="e300605267e4f1076a4841165415138776a8cf13a2c4a8aef99e228176fdb314" exitCode=0 Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.727662 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" event={"ID":"800c4356-da72-47c4-9a83-5eeceacc7211","Type":"ContainerDied","Data":"e300605267e4f1076a4841165415138776a8cf13a2c4a8aef99e228176fdb314"} Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.730825 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=32.730798846 podStartE2EDuration="32.730798846s" podCreationTimestamp="2026-02-03 10:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:14.726950207 +0000 UTC m=+1384.882926336" watchObservedRunningTime="2026-02-03 10:25:14.730798846 +0000 UTC m=+1384.886774975" Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.780865 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e731f56b-df87-43c2-9b58-dcb496df80c9","Type":"ContainerStarted","Data":"b4e4a1e6a2630ad64ab7d63e96ac55cace7d3a6b86ca6cfcc1a22bf419376de0"} Feb 03 10:25:14 crc kubenswrapper[5010]: 
Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.781160 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-log" containerID="cri-o://8a5453edee79c0d75e7ddeabeb025c5dee661893de0985e382bb10724d267f76" gracePeriod=30
Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.781671 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-httpd" containerID="cri-o://b4e4a1e6a2630ad64ab7d63e96ac55cace7d3a6b86ca6cfcc1a22bf419376de0" gracePeriod=30
Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.789941 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867995856-hbnv9" event={"ID":"ec3f26b1-ee88-47b4-80d5-f281aa85c00d","Type":"ContainerStarted","Data":"13a99ef6826ee2239f9e033be19a6f4c730512b38fb4cc1caa87b9ad6b5789db"}
Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.789992 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867995856-hbnv9" event={"ID":"ec3f26b1-ee88-47b4-80d5-f281aa85c00d","Type":"ContainerStarted","Data":"61b9f09360bad3b65b22af3bd28bc767427a951a1f75a5674af55a31458394a9"}
Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.790081 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-867995856-hbnv9"
Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.834959 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=32.83492333 podStartE2EDuration="32.83492333s" podCreationTimestamp="2026-02-03 10:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:14.815603062 +0000 UTC m=+1384.971579191" watchObservedRunningTime="2026-02-03 10:25:14.83492333 +0000 UTC m=+1384.990899459"
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.228635 5010 generic.go:334] "Generic (PLEG): container finished" podID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerID="04f1ed0eb618ead4dfd5e192e6cbd45c7a42c68a8906bfc9878f7864e6544b0e" exitCode=0
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.261505 5010 generic.go:334] "Generic (PLEG): container finished" podID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerID="6700db575ba245cd84da8dd0d6b288edc79eb5817a450848a4a630c96ccb0a97" exitCode=143
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.232346 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c01a7e05-aa67-4606-9a08-c7a91dd9b332","Type":"ContainerDied","Data":"04f1ed0eb618ead4dfd5e192e6cbd45c7a42c68a8906bfc9878f7864e6544b0e"}
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.262163 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c01a7e05-aa67-4606-9a08-c7a91dd9b332","Type":"ContainerDied","Data":"6700db575ba245cd84da8dd0d6b288edc79eb5817a450848a4a630c96ccb0a97"}
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.325607 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c5b6f6cc-94dq7" event={"ID":"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688","Type":"ContainerStarted","Data":"f95d5f955943f1d6179b138d89e148c3a26347690a24c1fd2737b1cfd76d3955"}
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.326034 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c5b6f6cc-94dq7" event={"ID":"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688","Type":"ContainerStarted","Data":"b27f611dc82e161f85b167c99dbce2d08eedaac7c3dd33e70725328f6c7d0a68"}
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.346084 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" event={"ID":"800c4356-da72-47c4-9a83-5eeceacc7211","Type":"ContainerStarted","Data":"d1764054e077cd4256f8f822597e57237fec354ad2e79a0451fb06420764c4a9"}
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.346924 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-v4m78"
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.372396 5010 generic.go:334] "Generic (PLEG): container finished" podID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerID="b4e4a1e6a2630ad64ab7d63e96ac55cace7d3a6b86ca6cfcc1a22bf419376de0" exitCode=0
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.372455 5010 generic.go:334] "Generic (PLEG): container finished" podID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerID="8a5453edee79c0d75e7ddeabeb025c5dee661893de0985e382bb10724d267f76" exitCode=143
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.373981 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e731f56b-df87-43c2-9b58-dcb496df80c9","Type":"ContainerDied","Data":"b4e4a1e6a2630ad64ab7d63e96ac55cace7d3a6b86ca6cfcc1a22bf419376de0"}
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.374049 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e731f56b-df87-43c2-9b58-dcb496df80c9","Type":"ContainerDied","Data":"8a5453edee79c0d75e7ddeabeb025c5dee661893de0985e382bb10724d267f76"}
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.397278 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" podStartSLOduration=6.397209114 podStartE2EDuration="6.397209114s" podCreationTimestamp="2026-02-03 10:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:16.394610727 +0000 UTC m=+1386.550586856" watchObservedRunningTime="2026-02-03 10:25:16.397209114 +0000 UTC m=+1386.553185243"
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.400578 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-867995856-hbnv9" podStartSLOduration=6.40055684 podStartE2EDuration="6.40055684s" podCreationTimestamp="2026-02-03 10:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:14.859140804 +0000 UTC m=+1385.015116943" watchObservedRunningTime="2026-02-03 10:25:16.40055684 +0000 UTC m=+1386.556532979"
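[editor's note] The exit codes in these PLEG records follow the shell convention for signal deaths: the glance-httpd containers exit 0 after a clean shutdown, while the glance-log containers exit 143, which is 128 + 15, i.e. killed by the SIGTERM that opens the 30-second grace period logged by kuberuntime_container.go above. A small illustrative decoder (not kubelet code):

package main

import (
	"fmt"
	"syscall"
)

// describeExit reads a container exit code the way POSIX shells do:
// values above 128 mean "terminated by signal (code - 128)".
func describeExit(code int) string {
	if code > 128 {
		sig := syscall.Signal(code - 128)
		return fmt.Sprintf("killed by signal %d (%v)", code-128, sig)
	}
	return fmt.Sprintf("exited with status %d", code)
}

func main() {
	fmt.Println("glance-httpd:", describeExit(0))  // clean shutdown
	fmt.Println("glance-log:", describeExit(143)) // 128 + SIGTERM(15)
}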
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.513594 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.595013 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-combined-ca-bundle\") pod \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") "
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.595134 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhwkv\" (UniqueName: \"kubernetes.io/projected/c01a7e05-aa67-4606-9a08-c7a91dd9b332-kube-api-access-qhwkv\") pod \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") "
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.595195 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-scripts\") pod \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") "
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.595395 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-logs\") pod \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") "
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.595766 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-config-data\") pod \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") "
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.595928 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-httpd-run\") pod \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") "
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.596094 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") "
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.608109 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-logs" (OuterVolumeSpecName: "logs") pod "c01a7e05-aa67-4606-9a08-c7a91dd9b332" (UID: "c01a7e05-aa67-4606-9a08-c7a91dd9b332"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.630432 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c01a7e05-aa67-4606-9a08-c7a91dd9b332" (UID: "c01a7e05-aa67-4606-9a08-c7a91dd9b332"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.637474 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-scripts" (OuterVolumeSpecName: "scripts") pod "c01a7e05-aa67-4606-9a08-c7a91dd9b332" (UID: "c01a7e05-aa67-4606-9a08-c7a91dd9b332"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.680545 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c01a7e05-aa67-4606-9a08-c7a91dd9b332-kube-api-access-qhwkv" (OuterVolumeSpecName: "kube-api-access-qhwkv") pod "c01a7e05-aa67-4606-9a08-c7a91dd9b332" (UID: "c01a7e05-aa67-4606-9a08-c7a91dd9b332"). InnerVolumeSpecName "kube-api-access-qhwkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.680712 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "c01a7e05-aa67-4606-9a08-c7a91dd9b332" (UID: "c01a7e05-aa67-4606-9a08-c7a91dd9b332"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.707499 5010 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.707826 5010 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.707887 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhwkv\" (UniqueName: \"kubernetes.io/projected/c01a7e05-aa67-4606-9a08-c7a91dd9b332-kube-api-access-qhwkv\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.707944 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.708006 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-logs\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.737747 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c01a7e05-aa67-4606-9a08-c7a91dd9b332" (UID: "c01a7e05-aa67-4606-9a08-c7a91dd9b332"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
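[editor's note] Per volume, the teardown above runs in a fixed order: operationExecutor.UnmountVolume starts (reconciler_common.go:159), UnmountVolume.TearDown succeeds (operation_generator.go:803), the reconciler reports the volume detached (reconciler_common.go:293), and only the device-backed local-volume PV gets the extra UnmountDevice phase (reconciler_common.go:286 / operation_generator.go:917). A toy ordering check over a hand-copied subset of those events; the types and phase names are this sketch's own, not kubelet's:

package main

import "fmt"

// event is one (volume, phase) pair in log order, copied by hand from the
// teardown records above for pod c01a7e05.
type event struct{ vol, phase string }

func main() {
	seq := []event{
		{"logs", "TearDown"}, {"httpd-run", "TearDown"}, {"scripts", "TearDown"},
		{"httpd-run", "Detached"}, {"scripts", "Detached"}, {"logs", "Detached"},
	}
	torn := map[string]bool{}
	ok := true
	for _, e := range seq {
		switch e.phase {
		case "TearDown":
			torn[e.vol] = true
		case "Detached":
			if !torn[e.vol] {
				ok = false
				fmt.Println("ordering violation:", e.vol)
			}
		}
	}
	if ok {
		fmt.Println("every Detached was preceded by its TearDown")
	}
}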
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.738154 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b5b4c5ff-x859r"
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.783590 5010 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.811934 5010 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.812000 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:16.821672 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-config-data" (OuterVolumeSpecName: "config-data") pod "c01a7e05-aa67-4606-9a08-c7a91dd9b332" (UID: "c01a7e05-aa67-4606-9a08-c7a91dd9b332"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.203254 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.230288 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-combined-ca-bundle\") pod \"e731f56b-df87-43c2-9b58-dcb496df80c9\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") "
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.230443 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-config-data\") pod \"e731f56b-df87-43c2-9b58-dcb496df80c9\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") "
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.230642 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6776\" (UniqueName: \"kubernetes.io/projected/e731f56b-df87-43c2-9b58-dcb496df80c9-kube-api-access-q6776\") pod \"e731f56b-df87-43c2-9b58-dcb496df80c9\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") "
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.230688 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"e731f56b-df87-43c2-9b58-dcb496df80c9\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") "
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.230734 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-httpd-run\") pod \"e731f56b-df87-43c2-9b58-dcb496df80c9\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") "
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.230768 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-scripts\") pod \"e731f56b-df87-43c2-9b58-dcb496df80c9\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") "
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.230845 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-logs\") pod \"e731f56b-df87-43c2-9b58-dcb496df80c9\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") "
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.231429 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.232843 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-logs" (OuterVolumeSpecName: "logs") pod "e731f56b-df87-43c2-9b58-dcb496df80c9" (UID: "e731f56b-df87-43c2-9b58-dcb496df80c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.233273 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e731f56b-df87-43c2-9b58-dcb496df80c9" (UID: "e731f56b-df87-43c2-9b58-dcb496df80c9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.238487 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e731f56b-df87-43c2-9b58-dcb496df80c9-kube-api-access-q6776" (OuterVolumeSpecName: "kube-api-access-q6776") pod "e731f56b-df87-43c2-9b58-dcb496df80c9" (UID: "e731f56b-df87-43c2-9b58-dcb496df80c9"). InnerVolumeSpecName "kube-api-access-q6776". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.242684 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-scripts" (OuterVolumeSpecName: "scripts") pod "e731f56b-df87-43c2-9b58-dcb496df80c9" (UID: "e731f56b-df87-43c2-9b58-dcb496df80c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.258787 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "e731f56b-df87-43c2-9b58-dcb496df80c9" (UID: "e731f56b-df87-43c2-9b58-dcb496df80c9"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.322875 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e731f56b-df87-43c2-9b58-dcb496df80c9" (UID: "e731f56b-df87-43c2-9b58-dcb496df80c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.331627 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-config-data" (OuterVolumeSpecName: "config-data") pod "e731f56b-df87-43c2-9b58-dcb496df80c9" (UID: "e731f56b-df87-43c2-9b58-dcb496df80c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.335808 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.337655 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.337745 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6776\" (UniqueName: \"kubernetes.io/projected/e731f56b-df87-43c2-9b58-dcb496df80c9-kube-api-access-q6776\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.337852 5010 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.337974 5010 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.338036 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.338111 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-logs\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.393815 5010 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.433921 5010 generic.go:334] "Generic (PLEG): container finished" podID="29ef610c-3c09-4b27-9b97-3a5350388caa" containerID="9f5dffa42b9c5fba57b57a1ca0e358ff317d50df295683f9bc9e42abb84b1b81" exitCode=0
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.434096 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tptfc" event={"ID":"29ef610c-3c09-4b27-9b97-3a5350388caa","Type":"ContainerDied","Data":"9f5dffa42b9c5fba57b57a1ca0e358ff317d50df295683f9bc9e42abb84b1b81"}
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.443808 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e731f56b-df87-43c2-9b58-dcb496df80c9","Type":"ContainerDied","Data":"09d80471a02be8b08b6c00cb53adbc75820f62dbcbe1bed30472a593dcfe57cb"}
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.444176 5010 scope.go:117] "RemoveContainer" containerID="b4e4a1e6a2630ad64ab7d63e96ac55cace7d3a6b86ca6cfcc1a22bf419376de0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.444525 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.447152 5010 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.456101 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c01a7e05-aa67-4606-9a08-c7a91dd9b332","Type":"ContainerDied","Data":"d4d81e3a7705c11b3d4b432eac5a8a598f0ea28d2b2cfb774c5c3a7b63578142"}
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.456409 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.498222 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c5b6f6cc-94dq7" event={"ID":"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688","Type":"ContainerStarted","Data":"e0894a68073b3bd07b800e9f0879ea84ca668a89746cac6928280bad0a28dded"}
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.499527 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-58c5b6f6cc-94dq7"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.539591 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-58c5b6f6cc-94dq7" podStartSLOduration=4.539554783 podStartE2EDuration="4.539554783s" podCreationTimestamp="2026-02-03 10:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:17.52235622 +0000 UTC m=+1387.678332359" watchObservedRunningTime="2026-02-03 10:25:17.539554783 +0000 UTC m=+1387.695530922"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.625652 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.647872 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.685917 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.707956 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.712657 5010 scope.go:117] "RemoveContainer" containerID="8a5453edee79c0d75e7ddeabeb025c5dee661893de0985e382bb10724d267f76"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.731321 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 10:25:17 crc kubenswrapper[5010]: E0203 10:25:17.732137 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-httpd"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732171 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-httpd"
Feb 03 10:25:17 crc kubenswrapper[5010]: E0203 10:25:17.732195 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-log"
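[editor's note] The SyncLoop DELETE/REMOVE/ADD burst above is the glance pods being deleted and immediately recreated: the pod names stay the same while the UIDs change (the internal API pod moves from c01a7e05-... to 8d327288-... in the records that follow), which is why the resource managers purge state per UID rather than per name. Any tooling over a log like this should likewise group by podUID; a sketch with hand-copied sample data and types of its own:

package main

import "fmt"

// record is a minimal stand-in for a parsed log record.
type record struct{ pod, uid, msg string }

func main() {
	recs := []record{
		{"openstack/glance-default-internal-api-0", "c01a7e05-aa67-4606-9a08-c7a91dd9b332", "ContainerDied"},
		{"openstack/glance-default-internal-api-0", "8d327288-f34e-4766-b3f6-b52b5c985d7d", "ContainerStarted"},
	}
	// Group by UID: the same pod name yields two separate histories.
	byUID := map[string][]string{}
	for _, r := range recs {
		byUID[r.uid] = append(byUID[r.uid], r.msg)
	}
	for uid, msgs := range byUID {
		fmt.Println(uid[:8], msgs)
	}
}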
"RemoveStaleState: removing container" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-log" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732205 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-log" Feb 03 10:25:17 crc kubenswrapper[5010]: E0203 10:25:17.732242 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-log" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732252 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-log" Feb 03 10:25:17 crc kubenswrapper[5010]: E0203 10:25:17.732268 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-httpd" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732276 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-httpd" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732537 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-httpd" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732608 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-log" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732654 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-log" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732668 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-httpd" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.733949 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.741420 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.741634 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.741816 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.741975 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mtbjz" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.744375 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.754184 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.758747 5010 util.go:30] "No sandbox for pod can be found. 
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.762734 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764488 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764565 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764628 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764655 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764683 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764723 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ddcb\" (UniqueName: \"kubernetes.io/projected/8d327288-f34e-4766-b3f6-b52b5c985d7d-kube-api-access-8ddcb\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764814 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-logs\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764853 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764876 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.767871 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.779643 5010 scope.go:117] "RemoveContainer" containerID="04f1ed0eb618ead4dfd5e192e6cbd45c7a42c68a8906bfc9878f7864e6544b0e"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874395 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874521 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874557 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874606 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ddcb\" (UniqueName: \"kubernetes.io/projected/8d327288-f34e-4766-b3f6-b52b5c985d7d-kube-api-access-8ddcb\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874674 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-logs\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874726 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874745 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874797 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.877188 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-logs\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.877468 5010 scope.go:117] "RemoveContainer" containerID="6700db575ba245cd84da8dd0d6b288edc79eb5817a450848a4a630c96ccb0a97"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.878175 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.881314 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.896520 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.898087 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.914192 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.917117 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ddcb\" (UniqueName: \"kubernetes.io/projected/8d327288-f34e-4766-b3f6-b52b5c985d7d-kube-api-access-8ddcb\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.917538 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.925532 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
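[editor's note] The replacement pod's volumes pass through the mount pipeline in three visible phases: VerifyControllerAttachedVolume (reconciler_common.go:245), then MountVolume, which for the device-backed local PV logs a MountDevice step with its device mount path /mnt/openstack/pv10 (operation_generator.go:580), and finally the per-pod MountVolume.SetUp (operation_generator.go:637). Pulling the device path out of such a record takes one regexp; the pattern below is this example's own assumption about the quoting, and the sample record is abridged:

package main

import (
	"fmt"
	"regexp"
)

// devicePath pulls the device mount path out of a MountDevice record.
var devicePath = regexp.MustCompile(`device mount path \\"([^"\\]+)\\"`)

func main() {
	rec := `I0203 10:25:17.878175 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0"`
	if m := devicePath.FindStringSubmatch(rec); m != nil {
		fmt.Println("PV mounted at", m[1]) // /mnt/openstack/pv10
	}
}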
pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980099 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-logs\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980352 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980530 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-scripts\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980591 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980713 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v84sf\" (UniqueName: \"kubernetes.io/projected/3ef87127-760d-4f81-8a78-a06d074c7ec3-kube-api-access-v84sf\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980751 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980789 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980823 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-config-data\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.064637 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083128 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-config-data\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083262 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-logs\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083329 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083491 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-scripts\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083533 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083614 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v84sf\" (UniqueName: \"kubernetes.io/projected/3ef87127-760d-4f81-8a78-a06d074c7ec3-kube-api-access-v84sf\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083668 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083739 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.084049 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Feb 03 
10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.085089 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-logs\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.089174 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.092418 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-scripts\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.098415 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.103305 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.105064 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-config-data\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.121051 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v84sf\" (UniqueName: \"kubernetes.io/projected/3ef87127-760d-4f81-8a78-a06d074c7ec3-kube-api-access-v84sf\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.152547 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.504370 5010 util.go:30] "No sandbox for pod can be found. 
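[editor's note] "No sandbox for pod can be found. Need to start a new one" (util.go:30) appears for freshly added pods, and the "No ready sandbox" variant (util.go:48) after an existing sandbox has died; counting either per pod is a quick way to spot churn when scanning a log like this. A sketch over lines copied from the records above:

package main

import (
	"fmt"
	"regexp"
)

var newSandbox = regexp.MustCompile(
	`No (?:ready )?sandbox for pod can be found. Need to start a new one" pod="([^"]+)"`)

func main() {
	lines := []string{ // copied from the records above
		`I0203 10:25:17.733949 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"`,
		`I0203 10:25:18.064637 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"`,
		`I0203 10:25:18.504370 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"`,
	}
	counts := map[string]int{}
	for _, l := range lines {
		if m := newSandbox.FindStringSubmatch(l); m != nil {
			counts[m[1]]++
		}
	}
	fmt.Println(counts) // internal-api-0 appears twice, external-api-0 once
}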
Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.544503 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" path="/var/lib/kubelet/pods/c01a7e05-aa67-4606-9a08-c7a91dd9b332/volumes"
Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.546412 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" path="/var/lib/kubelet/pods/e731f56b-df87-43c2-9b58-dcb496df80c9/volumes"
Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.858374 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.373489 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tptfc"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.379323 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 10:25:19 crc kubenswrapper[5010]: W0203 10:25:19.407964 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ef87127_760d_4f81_8a78_a06d074c7ec3.slice/crio-6bd4ac18ae915fc96ca9ce387172eccabbebfdb18cd09371727e5b54df8c7288 WatchSource:0}: Error finding container 6bd4ac18ae915fc96ca9ce387172eccabbebfdb18cd09371727e5b54df8c7288: Status 404 returned error can't find the container with id 6bd4ac18ae915fc96ca9ce387172eccabbebfdb18cd09371727e5b54df8c7288
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.489695 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-combined-ca-bundle\") pod \"29ef610c-3c09-4b27-9b97-3a5350388caa\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") "
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.489788 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcm2f\" (UniqueName: \"kubernetes.io/projected/29ef610c-3c09-4b27-9b97-3a5350388caa-kube-api-access-wcm2f\") pod \"29ef610c-3c09-4b27-9b97-3a5350388caa\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") "
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.489890 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ef610c-3c09-4b27-9b97-3a5350388caa-logs\") pod \"29ef610c-3c09-4b27-9b97-3a5350388caa\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") "
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.489932 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-config-data\") pod \"29ef610c-3c09-4b27-9b97-3a5350388caa\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") "
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.489993 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-scripts\") pod \"29ef610c-3c09-4b27-9b97-3a5350388caa\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") "
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.492818 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29ef610c-3c09-4b27-9b97-3a5350388caa-logs" (OuterVolumeSpecName: "logs") pod "29ef610c-3c09-4b27-9b97-3a5350388caa" (UID: "29ef610c-3c09-4b27-9b97-3a5350388caa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.511022 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-scripts" (OuterVolumeSpecName: "scripts") pod "29ef610c-3c09-4b27-9b97-3a5350388caa" (UID: "29ef610c-3c09-4b27-9b97-3a5350388caa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.511336 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ef610c-3c09-4b27-9b97-3a5350388caa-kube-api-access-wcm2f" (OuterVolumeSpecName: "kube-api-access-wcm2f") pod "29ef610c-3c09-4b27-9b97-3a5350388caa" (UID: "29ef610c-3c09-4b27-9b97-3a5350388caa"). InnerVolumeSpecName "kube-api-access-wcm2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.558145 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-config-data" (OuterVolumeSpecName: "config-data") pod "29ef610c-3c09-4b27-9b97-3a5350388caa" (UID: "29ef610c-3c09-4b27-9b97-3a5350388caa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.584795 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29ef610c-3c09-4b27-9b97-3a5350388caa" (UID: "29ef610c-3c09-4b27-9b97-3a5350388caa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.595458 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.595553 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcm2f\" (UniqueName: \"kubernetes.io/projected/29ef610c-3c09-4b27-9b97-3a5350388caa-kube-api-access-wcm2f\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.595572 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ef610c-3c09-4b27-9b97-3a5350388caa-logs\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.595585 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.595596 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.659675 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d327288-f34e-4766-b3f6-b52b5c985d7d","Type":"ContainerStarted","Data":"1764b6a93e3f3ed5e01b4b46981d2b3555284f7ada6ea1b560610775c21c68d5"}
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.665695 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tptfc" event={"ID":"29ef610c-3c09-4b27-9b97-3a5350388caa","Type":"ContainerDied","Data":"8dff0c755a50d3ce83f3790da9a77abbdd3719d09b62bae731558162867118c1"}
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.665759 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dff0c755a50d3ce83f3790da9a77abbdd3719d09b62bae731558162867118c1"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.665767 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tptfc"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.681548 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ef87127-760d-4f81-8a78-a06d074c7ec3","Type":"ContainerStarted","Data":"6bd4ac18ae915fc96ca9ce387172eccabbebfdb18cd09371727e5b54df8c7288"}
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.743804 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7f744c8944-2zwzr"]
Feb 03 10:25:19 crc kubenswrapper[5010]: E0203 10:25:19.744500 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ef610c-3c09-4b27-9b97-3a5350388caa" containerName="placement-db-sync"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.744522 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ef610c-3c09-4b27-9b97-3a5350388caa" containerName="placement-db-sync"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.744747 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ef610c-3c09-4b27-9b97-3a5350388caa" containerName="placement-db-sync"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.746140 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f744c8944-2zwzr"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.753953 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.754330 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.754507 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.755041 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dtdfs"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.755086 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.765782 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f744c8944-2zwzr"]
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.902842 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-combined-ca-bundle\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.902917 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj8c4\" (UniqueName: \"kubernetes.io/projected/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-kube-api-access-rj8c4\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.902969 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-public-tls-certs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.903018 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-internal-tls-certs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.903049 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-scripts\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.903079 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-config-data\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr"
Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.903139 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-logs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr"
Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.368102 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-combined-ca-bundle\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr"
Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.377142 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj8c4\" (UniqueName: \"kubernetes.io/projected/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-kube-api-access-rj8c4\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr"
Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.377363 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-public-tls-certs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr"
Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.377519 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-internal-tls-certs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr"
Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.377605 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-scripts\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr"
Feb
03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.385974 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-combined-ca-bundle\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.390494 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-internal-tls-certs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.393963 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-public-tls-certs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.397565 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-config-data\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.397754 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-logs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.398420 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-logs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.421976 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-scripts\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.435236 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-config-data\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.444203 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj8c4\" (UniqueName: \"kubernetes.io/projected/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-kube-api-access-rj8c4\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.688832 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.721017 5010 generic.go:334] "Generic (PLEG): container finished" podID="457510b3-7c5a-456d-9df3-54fa7dee8c4b" containerID="eec510d597d8f2314ae76e8de6136bb5224447e6e83068a025a8dfed4080a04f" exitCode=0 Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.720930 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-swx9t" event={"ID":"457510b3-7c5a-456d-9df3-54fa7dee8c4b","Type":"ContainerDied","Data":"eec510d597d8f2314ae76e8de6136bb5224447e6e83068a025a8dfed4080a04f"} Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.978517 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.071008 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-tpx4x"] Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.071429 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerName="dnsmasq-dns" containerID="cri-o://c9a7cc65c09b93f157cada4e0c074bf50be6834a16b4169ebac2602a35731c7e" gracePeriod=10 Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.378976 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f744c8944-2zwzr"] Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.552604 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.796056 5010 generic.go:334] "Generic (PLEG): container finished" podID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerID="c9a7cc65c09b93f157cada4e0c074bf50be6834a16b4169ebac2602a35731c7e" exitCode=0 Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.796737 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" event={"ID":"9eb55fd4-6f97-47c3-bd98-89ca6331cf88","Type":"ContainerDied","Data":"c9a7cc65c09b93f157cada4e0c074bf50be6834a16b4169ebac2602a35731c7e"} Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.803992 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d327288-f34e-4766-b3f6-b52b5c985d7d","Type":"ContainerStarted","Data":"d96c848085855a1aab0bb15f4dcb25d155e8b02a76c2309a7e985e9edc63c08c"} Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.812687 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ef87127-760d-4f81-8a78-a06d074c7ec3","Type":"ContainerStarted","Data":"55bbb2cde20dfdcd53e2ce462c09a9714ec6a75aaad1416462255a0ed6efb0a8"} Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.822501 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f744c8944-2zwzr" event={"ID":"8d6356a1-c07c-4d04-8d48-7f13a822ddf5","Type":"ContainerStarted","Data":"089e9b9bfea0632f8dc13a626391ff9a317374bb6a62f576e2749c15e06ebc0d"} Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.853008 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.907192 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-swift-storage-0\") pod \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.907361 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-svc\") pod \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.907420 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-config\") pod \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.907487 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-nb\") pod \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.907533 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-sb\") pod \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.907592 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqnmc\" (UniqueName: \"kubernetes.io/projected/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-kube-api-access-zqnmc\") pod \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.924655 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-kube-api-access-zqnmc" (OuterVolumeSpecName: "kube-api-access-zqnmc") pod "9eb55fd4-6f97-47c3-bd98-89ca6331cf88" (UID: "9eb55fd4-6f97-47c3-bd98-89ca6331cf88"). InnerVolumeSpecName "kube-api-access-zqnmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.015756 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqnmc\" (UniqueName: \"kubernetes.io/projected/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-kube-api-access-zqnmc\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.076712 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9eb55fd4-6f97-47c3-bd98-89ca6331cf88" (UID: "9eb55fd4-6f97-47c3-bd98-89ca6331cf88"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.118442 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.397568 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9eb55fd4-6f97-47c3-bd98-89ca6331cf88" (UID: "9eb55fd4-6f97-47c3-bd98-89ca6331cf88"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.405616 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9eb55fd4-6f97-47c3-bd98-89ca6331cf88" (UID: "9eb55fd4-6f97-47c3-bd98-89ca6331cf88"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.412171 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-config" (OuterVolumeSpecName: "config") pod "9eb55fd4-6f97-47c3-bd98-89ca6331cf88" (UID: "9eb55fd4-6f97-47c3-bd98-89ca6331cf88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.421917 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9eb55fd4-6f97-47c3-bd98-89ca6331cf88" (UID: "9eb55fd4-6f97-47c3-bd98-89ca6331cf88"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.437897 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.437944 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.437960 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.437974 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.716025 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.806261 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.852392 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk8xc\" (UniqueName: \"kubernetes.io/projected/457510b3-7c5a-456d-9df3-54fa7dee8c4b-kube-api-access-jk8xc\") pod \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.852534 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-fernet-keys\") pod \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.852580 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-scripts\") pod \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.852611 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-config-data\") pod \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.852795 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-combined-ca-bundle\") pod \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.852845 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-credential-keys\") pod \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.868081 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f744c8944-2zwzr" event={"ID":"8d6356a1-c07c-4d04-8d48-7f13a822ddf5","Type":"ContainerStarted","Data":"68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8"} Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.872976 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-scripts" (OuterVolumeSpecName: "scripts") pod "457510b3-7c5a-456d-9df3-54fa7dee8c4b" (UID: "457510b3-7c5a-456d-9df3-54fa7dee8c4b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.877582 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/457510b3-7c5a-456d-9df3-54fa7dee8c4b-kube-api-access-jk8xc" (OuterVolumeSpecName: "kube-api-access-jk8xc") pod "457510b3-7c5a-456d-9df3-54fa7dee8c4b" (UID: "457510b3-7c5a-456d-9df3-54fa7dee8c4b"). InnerVolumeSpecName "kube-api-access-jk8xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.879713 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" event={"ID":"9eb55fd4-6f97-47c3-bd98-89ca6331cf88","Type":"ContainerDied","Data":"93d0e004e008b5e1b05321fcaf14211b090b2038acd1b389851fdfc6ab3c1331"} Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.879780 5010 scope.go:117] "RemoveContainer" containerID="c9a7cc65c09b93f157cada4e0c074bf50be6834a16b4169ebac2602a35731c7e" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.879997 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.883445 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "457510b3-7c5a-456d-9df3-54fa7dee8c4b" (UID: "457510b3-7c5a-456d-9df3-54fa7dee8c4b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.886002 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "457510b3-7c5a-456d-9df3-54fa7dee8c4b" (UID: "457510b3-7c5a-456d-9df3-54fa7dee8c4b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.890817 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d327288-f34e-4766-b3f6-b52b5c985d7d","Type":"ContainerStarted","Data":"25ca14ceea3124e9ce28f484389b454fe015ddd37e62df01b7fb16db5f838f83"} Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.902515 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-swx9t" event={"ID":"457510b3-7c5a-456d-9df3-54fa7dee8c4b","Type":"ContainerDied","Data":"9bb617f937270e1fe6e444469ff83627ed35fc24df5672358eff75f2893f7693"} Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.902587 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bb617f937270e1fe6e444469ff83627ed35fc24df5672358eff75f2893f7693" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.902693 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.956046 5010 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.956093 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk8xc\" (UniqueName: \"kubernetes.io/projected/457510b3-7c5a-456d-9df3-54fa7dee8c4b-kube-api-access-jk8xc\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.956108 5010 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.956118 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.961259 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-675cc696d4-7wvtv"] Feb 03 10:25:22 crc kubenswrapper[5010]: E0203 10:25:22.962033 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457510b3-7c5a-456d-9df3-54fa7dee8c4b" containerName="keystone-bootstrap" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.962068 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="457510b3-7c5a-456d-9df3-54fa7dee8c4b" containerName="keystone-bootstrap" Feb 03 10:25:22 crc kubenswrapper[5010]: E0203 10:25:22.962097 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerName="init" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.962105 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerName="init" Feb 03 10:25:22 crc kubenswrapper[5010]: E0203 10:25:22.962119 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerName="dnsmasq-dns" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.962130 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerName="dnsmasq-dns" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.963873 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="457510b3-7c5a-456d-9df3-54fa7dee8c4b" containerName="keystone-bootstrap" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.963924 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerName="dnsmasq-dns" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.964913 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.969270 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.981020 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.023297 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.023256949 podStartE2EDuration="6.023256949s" podCreationTimestamp="2026-02-03 10:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:22.927910261 +0000 UTC m=+1393.083886390" watchObservedRunningTime="2026-02-03 10:25:23.023256949 +0000 UTC m=+1393.179233078" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.039807 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-675cc696d4-7wvtv"] Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.045654 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "457510b3-7c5a-456d-9df3-54fa7dee8c4b" (UID: "457510b3-7c5a-456d-9df3-54fa7dee8c4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.058138 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-scripts\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.058562 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n95h6\" (UniqueName: \"kubernetes.io/projected/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-kube-api-access-n95h6\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.058813 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-internal-tls-certs\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.059005 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-public-tls-certs\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.059162 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-combined-ca-bundle\") pod \"keystone-675cc696d4-7wvtv\" (UID: 
\"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.059333 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-fernet-keys\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.059448 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-credential-keys\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.059514 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-config-data\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.059774 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.096611 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-config-data" (OuterVolumeSpecName: "config-data") pod "457510b3-7c5a-456d-9df3-54fa7dee8c4b" (UID: "457510b3-7c5a-456d-9df3-54fa7dee8c4b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.128313 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988db4-2mpfb" podUID="2fedcc57-b16c-4177-a10e-f627269b4adb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162403 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n95h6\" (UniqueName: \"kubernetes.io/projected/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-kube-api-access-n95h6\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162523 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-internal-tls-certs\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162556 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-public-tls-certs\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162607 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-combined-ca-bundle\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162646 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-fernet-keys\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162676 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-credential-keys\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162706 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-config-data\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162756 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-scripts\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 
10:25:23.162814 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.172522 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-combined-ca-bundle\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.173767 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-scripts\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.178321 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-fernet-keys\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.200243 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-credential-keys\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.201087 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-public-tls-certs\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.204178 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-config-data\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.209173 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-internal-tls-certs\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.231131 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n95h6\" (UniqueName: \"kubernetes.io/projected/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-kube-api-access-n95h6\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.337334 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-tpx4x"] Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.339535 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.348868 5010 scope.go:117] "RemoveContainer" containerID="9870cb3be829d265aa30927c41a48cc7802f5d65aec23cea9f8bcd10b02b6b19" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.359259 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-tpx4x"] Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.800269 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-bc6c5cf68-f9b4p"] Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.803192 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.883129 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bc6c5cf68-f9b4p"] Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.922938 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-logs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.923085 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-898jq\" (UniqueName: \"kubernetes.io/projected/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-kube-api-access-898jq\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.923140 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-public-tls-certs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.923248 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-internal-tls-certs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.923329 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-combined-ca-bundle\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.923485 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-scripts\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.923761 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-config-data\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.006256 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f744c8944-2zwzr" event={"ID":"8d6356a1-c07c-4d04-8d48-7f13a822ddf5","Type":"ContainerStarted","Data":"0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d"} Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.006843 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.006995 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.030732 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-scripts\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.030924 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-config-data\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.031087 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-logs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.031144 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-898jq\" (UniqueName: \"kubernetes.io/projected/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-kube-api-access-898jq\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.031181 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-public-tls-certs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.031245 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-internal-tls-certs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.031288 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-combined-ca-bundle\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc 
kubenswrapper[5010]: I0203 10:25:24.032332 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-logs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.040999 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ef87127-760d-4f81-8a78-a06d074c7ec3","Type":"ContainerStarted","Data":"9b0678012ddc709164e9aead0d03359efde01194b4a43605e01e402b58fd05e9"} Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.046920 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6tdx" event={"ID":"bad34e68-b20a-486c-b06b-e19f5aaaf917","Type":"ContainerStarted","Data":"56c4bc07b47d992164c95f2c4bc219b10e3ec8444d085ea923e9fc23515c64b1"} Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.070449 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-config-data\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.071322 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-combined-ca-bundle\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.072669 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-scripts\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.077793 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-internal-tls-certs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.085688 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-public-tls-certs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.089536 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7f744c8944-2zwzr" podStartSLOduration=5.089493756 podStartE2EDuration="5.089493756s" podCreationTimestamp="2026-02-03 10:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:24.049237268 +0000 UTC m=+1394.205213397" watchObservedRunningTime="2026-02-03 10:25:24.089493756 +0000 UTC m=+1394.245469885" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.090133 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-898jq\" 
(UniqueName: \"kubernetes.io/projected/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-kube-api-access-898jq\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.101658 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-675cc696d4-7wvtv"] Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.150978 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.15094334 podStartE2EDuration="7.15094334s" podCreationTimestamp="2026-02-03 10:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:24.102674316 +0000 UTC m=+1394.258650445" watchObservedRunningTime="2026-02-03 10:25:24.15094334 +0000 UTC m=+1394.306919469" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.165953 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.212682 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-g6tdx" podStartSLOduration=6.911040575 podStartE2EDuration="52.212651171s" podCreationTimestamp="2026-02-03 10:24:32 +0000 UTC" firstStartedPulling="2026-02-03 10:24:36.749046051 +0000 UTC m=+1346.905022180" lastFinishedPulling="2026-02-03 10:25:22.050656647 +0000 UTC m=+1392.206632776" observedRunningTime="2026-02-03 10:25:24.135490882 +0000 UTC m=+1394.291467011" watchObservedRunningTime="2026-02-03 10:25:24.212651171 +0000 UTC m=+1394.368627300" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.593432 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" path="/var/lib/kubelet/pods/9eb55fd4-6f97-47c3-bd98-89ca6331cf88/volumes" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.923233 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bc6c5cf68-f9b4p"] Feb 03 10:25:25 crc kubenswrapper[5010]: I0203 10:25:25.061441 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-675cc696d4-7wvtv" event={"ID":"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4","Type":"ContainerStarted","Data":"d6dac3e484a005977351cb033c83c44ebc6eb341c4e0affdfc49420dab5add60"} Feb 03 10:25:25 crc kubenswrapper[5010]: I0203 10:25:25.061527 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-675cc696d4-7wvtv" event={"ID":"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4","Type":"ContainerStarted","Data":"d12ca4ec55cc75e892ab98ddfbd2ac34d23b60e39019acf45130c87cd0b772e5"} Feb 03 10:25:25 crc kubenswrapper[5010]: I0203 10:25:25.062258 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:25 crc kubenswrapper[5010]: I0203 10:25:25.112251 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-675cc696d4-7wvtv" podStartSLOduration=3.112177091 podStartE2EDuration="3.112177091s" podCreationTimestamp="2026-02-03 10:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:25.103158468 +0000 UTC m=+1395.259134597" watchObservedRunningTime="2026-02-03 10:25:25.112177091 +0000 UTC m=+1395.268153230" Feb 03 10:25:26 crc 
Feb 03 10:25:26 crc kubenswrapper[5010]: I0203 10:25:26.085634 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9wwp" event={"ID":"1acc33e7-f3ae-4131-a003-aa6b592269c6","Type":"ContainerStarted","Data":"90f279a47e6694b954d6224d0a36d83bb292142a861407bbd952b7ac0f3f1940"}
Feb 03 10:25:26 crc kubenswrapper[5010]: I0203 10:25:26.115790 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-b9wwp" podStartSLOduration=7.147739497 podStartE2EDuration="54.115764252s" podCreationTimestamp="2026-02-03 10:24:32 +0000 UTC" firstStartedPulling="2026-02-03 10:24:37.165572258 +0000 UTC m=+1347.321548387" lastFinishedPulling="2026-02-03 10:25:24.133597013 +0000 UTC m=+1394.289573142" observedRunningTime="2026-02-03 10:25:26.114487919 +0000 UTC m=+1396.270464048" watchObservedRunningTime="2026-02-03 10:25:26.115764252 +0000 UTC m=+1396.271740381"
Feb 03 10:25:26 crc kubenswrapper[5010]: I0203 10:25:26.787707 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout"
Feb 03 10:25:27 crc kubenswrapper[5010]: W0203 10:25:27.873899 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ecd94c1_1faa_4acd_aa24_dd54388d2d99.slice/crio-18568e750adc664c2b522c22bba83c2766ecf2703b1e46b06ebeeaeaf7db2912 WatchSource:0}: Error finding container 18568e750adc664c2b522c22bba83c2766ecf2703b1e46b06ebeeaeaf7db2912: Status 404 returned error can't find the container with id 18568e750adc664c2b522c22bba83c2766ecf2703b1e46b06ebeeaeaf7db2912
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.066626 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.066697 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.123410 5010 generic.go:334] "Generic (PLEG): container finished" podID="bad34e68-b20a-486c-b06b-e19f5aaaf917" containerID="56c4bc07b47d992164c95f2c4bc219b10e3ec8444d085ea923e9fc23515c64b1" exitCode=0
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.123541 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6tdx" event={"ID":"bad34e68-b20a-486c-b06b-e19f5aaaf917","Type":"ContainerDied","Data":"56c4bc07b47d992164c95f2c4bc219b10e3ec8444d085ea923e9fc23515c64b1"}
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.128590 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc6c5cf68-f9b4p" event={"ID":"3ecd94c1-1faa-4acd-aa24-dd54388d2d99","Type":"ContainerStarted","Data":"18568e750adc664c2b522c22bba83c2766ecf2703b1e46b06ebeeaeaf7db2912"}
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.134168 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.135001 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.139230 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.519267 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.519870 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.625802 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.702256 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.892444 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zcvn8"]
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.895950 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zcvn8"
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.916353 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zcvn8"]
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.995030 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xt6g\" (UniqueName: \"kubernetes.io/projected/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-kube-api-access-7xt6g\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8"
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.995187 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-catalog-content\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8"
Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.995508 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-utilities\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8"
Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.098323 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xt6g\" (UniqueName: \"kubernetes.io/projected/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-kube-api-access-7xt6g\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8"
Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.098464 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-catalog-content\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8"
Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.098649 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-utilities\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8"
Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.099449 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-utilities\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8"
Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.100379 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-catalog-content\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8"
Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.145539 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xt6g\" (UniqueName: \"kubernetes.io/projected/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-kube-api-access-7xt6g\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8"
Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.159355 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4338eb03-3ad6-4d68-8d8a-a37694aff6d7","Type":"ContainerStarted","Data":"66c74d715b2eacb41bf0f0e39922576ad416b3eb1d6ad6955ec6036858cd2f1d"}
Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.181841 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc6c5cf68-f9b4p" event={"ID":"3ecd94c1-1faa-4acd-aa24-dd54388d2d99","Type":"ContainerStarted","Data":"da38cfd4d210ad528e6beb9b5e12f4d4bc0d000ce5c9371a1f32e78184a92b06"}
Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.181935 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc6c5cf68-f9b4p" event={"ID":"3ecd94c1-1faa-4acd-aa24-dd54388d2d99","Type":"ContainerStarted","Data":"aff1156efc4d495549c8c433efd558b598018579116ec1c91dc8694fdccf0411"}
Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.182183 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bc6c5cf68-f9b4p"
Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.182851 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bc6c5cf68-f9b4p"
Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.182916 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.182936 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.182951 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.270876 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-bc6c5cf68-f9b4p" podStartSLOduration=6.270847299 podStartE2EDuration="6.270847299s" podCreationTimestamp="2026-02-03 10:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC"
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:29.228709982 +0000 UTC m=+1399.384686111" watchObservedRunningTime="2026-02-03 10:25:29.270847299 +0000 UTC m=+1399.426823428" Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.286622 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:30 crc kubenswrapper[5010]: I0203 10:25:30.237524 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:25:30 crc kubenswrapper[5010]: I0203 10:25:30.936563 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.000518 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-db-sync-config-data\") pod \"bad34e68-b20a-486c-b06b-e19f5aaaf917\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.000618 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l7tp\" (UniqueName: \"kubernetes.io/projected/bad34e68-b20a-486c-b06b-e19f5aaaf917-kube-api-access-6l7tp\") pod \"bad34e68-b20a-486c-b06b-e19f5aaaf917\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.000651 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-combined-ca-bundle\") pod \"bad34e68-b20a-486c-b06b-e19f5aaaf917\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.032525 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad34e68-b20a-486c-b06b-e19f5aaaf917-kube-api-access-6l7tp" (OuterVolumeSpecName: "kube-api-access-6l7tp") pod "bad34e68-b20a-486c-b06b-e19f5aaaf917" (UID: "bad34e68-b20a-486c-b06b-e19f5aaaf917"). InnerVolumeSpecName "kube-api-access-6l7tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.053561 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bad34e68-b20a-486c-b06b-e19f5aaaf917" (UID: "bad34e68-b20a-486c-b06b-e19f5aaaf917"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.065656 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bad34e68-b20a-486c-b06b-e19f5aaaf917" (UID: "bad34e68-b20a-486c-b06b-e19f5aaaf917"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.105404 5010 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.105467 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l7tp\" (UniqueName: \"kubernetes.io/projected/bad34e68-b20a-486c-b06b-e19f5aaaf917-kube-api-access-6l7tp\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.105483 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.203903 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zcvn8"] Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.286581 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcvn8" event={"ID":"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb","Type":"ContainerStarted","Data":"e35e681b91c0a3ba4c5e23b8c2426b406cc51121c6807c30d998f313924cb39e"} Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.308034 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.308124 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6tdx" event={"ID":"bad34e68-b20a-486c-b06b-e19f5aaaf917","Type":"ContainerDied","Data":"a9d5da882cdcbed71ee51c06f06cb45291d0d12cebefa2201b69150f2363476e"} Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.308169 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9d5da882cdcbed71ee51c06f06cb45291d0d12cebefa2201b69150f2363476e" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.308751 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.308973 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.310340 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.310356 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:25:31 crc kubenswrapper[5010]: E0203 10:25:31.813132 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbad34e68_b20a_486c_b06b_e19f5aaaf917.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbad34e68_b20a_486c_b06b_e19f5aaaf917.slice/crio-a9d5da882cdcbed71ee51c06f06cb45291d0d12cebefa2201b69150f2363476e\": RecentStats: unable to find data in memory cache]" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.357196 5010 generic.go:334] "Generic (PLEG): container finished" podID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerID="fe0ab3a7555528e34ba8c05e18f87523a24b1e0ac976b994fc2479b4a244d8aa" exitCode=0 Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.357826 5010 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcvn8" event={"ID":"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb","Type":"ContainerDied","Data":"fe0ab3a7555528e34ba8c05e18f87523a24b1e0ac976b994fc2479b4a244d8aa"} Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.387538 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6bdd746887-zr9j6"] Feb 03 10:25:32 crc kubenswrapper[5010]: E0203 10:25:32.392692 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad34e68-b20a-486c-b06b-e19f5aaaf917" containerName="barbican-db-sync" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.392743 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad34e68-b20a-486c-b06b-e19f5aaaf917" containerName="barbican-db-sync" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.393384 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="bad34e68-b20a-486c-b06b-e19f5aaaf917" containerName="barbican-db-sync" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.395117 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.411913 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.412206 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-j94mw" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.420614 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.427732 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-85855ff49d-76x8k"] Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.430647 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.435792 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.446334 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6bdd746887-zr9j6"] Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.490415 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-85855ff49d-76x8k"] Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.496416 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-config-data-custom\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.496875 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-logs\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.497040 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-combined-ca-bundle\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.497112 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-config-data\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.497180 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfvtk\" (UniqueName: \"kubernetes.io/projected/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-kube-api-access-bfvtk\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.582407 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cxfv2"] Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.584588 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.609736 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttwxh\" (UniqueName: \"kubernetes.io/projected/f377630f-64f3-4fd9-8449-53d739d775c2-kube-api-access-ttwxh\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.609866 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-config-data-custom\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610025 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-config-data\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610112 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-logs\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610186 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-config-data-custom\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610272 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-combined-ca-bundle\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610310 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-combined-ca-bundle\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610346 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-config-data\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610370 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f377630f-64f3-4fd9-8449-53d739d775c2-logs\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610407 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfvtk\" (UniqueName: \"kubernetes.io/projected/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-kube-api-access-bfvtk\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.704383 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-logs\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714056 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-config-data-custom\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714160 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-combined-ca-bundle\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714188 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f377630f-64f3-4fd9-8449-53d739d775c2-logs\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714268 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tngbc\" (UniqueName: \"kubernetes.io/projected/73d76595-42a6-4756-a5c5-7135fe150f1e-kube-api-access-tngbc\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714307 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714359 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:32 
crc kubenswrapper[5010]: I0203 10:25:32.714377 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-svc\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714400 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttwxh\" (UniqueName: \"kubernetes.io/projected/f377630f-64f3-4fd9-8449-53d739d775c2-kube-api-access-ttwxh\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714421 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-config\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714502 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714525 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-config-data\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.718922 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cxfv2"] Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.734422 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-config-data\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.740319 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f377630f-64f3-4fd9-8449-53d739d775c2-logs\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.743874 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-config-data-custom\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.755632 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-config-data\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.758002 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-combined-ca-bundle\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.762160 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfvtk\" (UniqueName: \"kubernetes.io/projected/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-kube-api-access-bfvtk\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.762746 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-config-data-custom\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.767927 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttwxh\" (UniqueName: \"kubernetes.io/projected/f377630f-64f3-4fd9-8449-53d739d775c2-kube-api-access-ttwxh\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.772006 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-combined-ca-bundle\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.795270 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:32.806317 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:32.819877 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:32.819968 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-svc\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:32.820002 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-config\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:32.820101 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:32.820206 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tngbc\" (UniqueName: \"kubernetes.io/projected/73d76595-42a6-4756-a5c5-7135fe150f1e-kube-api-access-tngbc\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:32.820294 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.129873 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.138437 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.139433 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988db4-2mpfb" podUID="2fedcc57-b16c-4177-a10e-f627269b4adb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.143393 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-svc\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.143826 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.144066 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-config\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.147756 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.193411 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tngbc\" (UniqueName: \"kubernetes.io/projected/73d76595-42a6-4756-a5c5-7135fe150f1e-kube-api-access-tngbc\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.206482 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-595698fff8-qzxdr"] Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.211489 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.216236 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.325806 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b3477b-06e6-4914-a048-54af2ebc0250-logs\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.325878 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sz82\" (UniqueName: \"kubernetes.io/projected/34b3477b-06e6-4914-a048-54af2ebc0250-kube-api-access-8sz82\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.326071 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.326183 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-combined-ca-bundle\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.326364 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data-custom\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.374189 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-595698fff8-qzxdr"] Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.430949 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b3477b-06e6-4914-a048-54af2ebc0250-logs\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.431008 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sz82\" (UniqueName: \"kubernetes.io/projected/34b3477b-06e6-4914-a048-54af2ebc0250-kube-api-access-8sz82\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.431089 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data\") pod \"barbican-api-595698fff8-qzxdr\" (UID: 
\"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.431172 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-combined-ca-bundle\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.434585 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data-custom\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.437500 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b3477b-06e6-4914-a048-54af2ebc0250-logs\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.439389 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data-custom\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.451767 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.459366 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-combined-ca-bundle\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.486553 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sz82\" (UniqueName: \"kubernetes.io/projected/34b3477b-06e6-4914-a048-54af2ebc0250-kube-api-access-8sz82\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.491886 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.790114 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:34 crc kubenswrapper[5010]: I0203 10:25:34.489129 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6bdd746887-zr9j6"] Feb 03 10:25:34 crc kubenswrapper[5010]: I0203 10:25:34.834003 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-595698fff8-qzxdr"] Feb 03 10:25:34 crc kubenswrapper[5010]: I0203 10:25:34.865257 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-85855ff49d-76x8k"] Feb 03 10:25:34 crc kubenswrapper[5010]: I0203 10:25:34.942170 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cxfv2"] Feb 03 10:25:35 crc kubenswrapper[5010]: I0203 10:25:35.532764 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" event={"ID":"f377630f-64f3-4fd9-8449-53d739d775c2","Type":"ContainerStarted","Data":"3ad54f8c0bff3944cbe9d84e2b81608a6422ca7d9fbcefab4a5dad88134db118"} Feb 03 10:25:35 crc kubenswrapper[5010]: I0203 10:25:35.538553 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bdd746887-zr9j6" event={"ID":"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7","Type":"ContainerStarted","Data":"e7c5b8603827c99eb651153c65ffaba2307d6463e666112bb27572afc0a364ba"} Feb 03 10:25:35 crc kubenswrapper[5010]: I0203 10:25:35.545990 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcvn8" event={"ID":"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb","Type":"ContainerStarted","Data":"74673c9131b0207ab10afaa2abb5a53e1aa2d49409325c6d66e87e77d3e886a6"} Feb 03 10:25:35 crc kubenswrapper[5010]: I0203 10:25:35.556439 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595698fff8-qzxdr" event={"ID":"34b3477b-06e6-4914-a048-54af2ebc0250","Type":"ContainerStarted","Data":"276b5ede8be32b2fcd5e4dea2a354a0412bc1e3d512cddd2da2cb8731f6a5abd"} Feb 03 10:25:35 crc kubenswrapper[5010]: I0203 10:25:35.573289 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" event={"ID":"73d76595-42a6-4756-a5c5-7135fe150f1e","Type":"ContainerStarted","Data":"551880a184d3cea9debb67a96e769d028b7329cfb831b90c16d9edf472195a6b"} Feb 03 10:25:35 crc kubenswrapper[5010]: I0203 10:25:35.854528 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:35 crc kubenswrapper[5010]: I0203 10:25:35.854740 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.499379 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.500163 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.501514 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.644988 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.833533 5010 generic.go:334] "Generic (PLEG): container finished" podID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" 
containerID="74673c9131b0207ab10afaa2abb5a53e1aa2d49409325c6d66e87e77d3e886a6" exitCode=0 Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.833660 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcvn8" event={"ID":"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb","Type":"ContainerDied","Data":"74673c9131b0207ab10afaa2abb5a53e1aa2d49409325c6d66e87e77d3e886a6"} Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.840709 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595698fff8-qzxdr" event={"ID":"34b3477b-06e6-4914-a048-54af2ebc0250","Type":"ContainerStarted","Data":"a2e083c61dc7c9a5c3fac49824f7953d3fb85c8844f8a1f4ef14207348bfa1d9"} Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.840777 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595698fff8-qzxdr" event={"ID":"34b3477b-06e6-4914-a048-54af2ebc0250","Type":"ContainerStarted","Data":"e6b14e112fe4e444557f7a3aff312b5084d7db0d95368f7bd4f747a1a68cca9e"} Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.843091 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.843154 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.864443 5010 generic.go:334] "Generic (PLEG): container finished" podID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerID="2c19193a99dd2b89cf342b5374e508ef59ea58fbd9c5b83248ac4024b880fe95" exitCode=0 Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.865455 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" event={"ID":"73d76595-42a6-4756-a5c5-7135fe150f1e","Type":"ContainerDied","Data":"2c19193a99dd2b89cf342b5374e508ef59ea58fbd9c5b83248ac4024b880fe95"} Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.029700 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-595698fff8-qzxdr" podStartSLOduration=4.029642674 podStartE2EDuration="4.029642674s" podCreationTimestamp="2026-02-03 10:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:36.947725913 +0000 UTC m=+1407.103702042" watchObservedRunningTime="2026-02-03 10:25:37.029642674 +0000 UTC m=+1407.185618813" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.498111 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6f67746f54-2l6b9"] Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.501393 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.507029 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.507353 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.521634 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f67746f54-2l6b9"] Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.552523 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-config-data\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.552609 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-public-tls-certs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.552770 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-config-data-custom\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.552799 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-combined-ca-bundle\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.552825 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-logs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.552904 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2h2r\" (UniqueName: \"kubernetes.io/projected/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-kube-api-access-k2h2r\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.552959 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-internal-tls-certs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.655288 5010 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-config-data\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.655391 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-public-tls-certs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.655523 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-config-data-custom\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.655552 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-combined-ca-bundle\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.655583 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-logs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.655701 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2h2r\" (UniqueName: \"kubernetes.io/projected/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-kube-api-access-k2h2r\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.655751 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-internal-tls-certs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.662799 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-logs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.667975 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-config-data\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.668015 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-internal-tls-certs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.681262 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-config-data-custom\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.682471 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-combined-ca-bundle\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.684602 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-public-tls-certs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.691951 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2h2r\" (UniqueName: \"kubernetes.io/projected/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-kube-api-access-k2h2r\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.835982 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.888179 5010 generic.go:334] "Generic (PLEG): container finished" podID="1acc33e7-f3ae-4131-a003-aa6b592269c6" containerID="90f279a47e6694b954d6224d0a36d83bb292142a861407bbd952b7ac0f3f1940" exitCode=0 Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.888347 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9wwp" event={"ID":"1acc33e7-f3ae-4131-a003-aa6b592269c6","Type":"ContainerDied","Data":"90f279a47e6694b954d6224d0a36d83bb292142a861407bbd952b7ac0f3f1940"} Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.900328 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" event={"ID":"73d76595-42a6-4756-a5c5-7135fe150f1e","Type":"ContainerStarted","Data":"bcf09a9582e13a71a798c91df881d34f9629fd8355c0382e4f0464933e875d83"} Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.900392 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.950628 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" podStartSLOduration=5.950601296 podStartE2EDuration="5.950601296s" podCreationTimestamp="2026-02-03 10:25:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:37.936170354 +0000 UTC m=+1408.092146483" watchObservedRunningTime="2026-02-03 10:25:37.950601296 +0000 UTC m=+1408.106577425" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.437150 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.505131 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-scripts\") pod \"1acc33e7-f3ae-4131-a003-aa6b592269c6\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.505361 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f846k\" (UniqueName: \"kubernetes.io/projected/1acc33e7-f3ae-4131-a003-aa6b592269c6-kube-api-access-f846k\") pod \"1acc33e7-f3ae-4131-a003-aa6b592269c6\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.505497 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-combined-ca-bundle\") pod \"1acc33e7-f3ae-4131-a003-aa6b592269c6\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.506631 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-db-sync-config-data\") pod \"1acc33e7-f3ae-4131-a003-aa6b592269c6\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.506657 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1acc33e7-f3ae-4131-a003-aa6b592269c6-etc-machine-id\") pod \"1acc33e7-f3ae-4131-a003-aa6b592269c6\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.506684 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-config-data\") pod \"1acc33e7-f3ae-4131-a003-aa6b592269c6\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.506836 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1acc33e7-f3ae-4131-a003-aa6b592269c6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1acc33e7-f3ae-4131-a003-aa6b592269c6" (UID: "1acc33e7-f3ae-4131-a003-aa6b592269c6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.507313 5010 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1acc33e7-f3ae-4131-a003-aa6b592269c6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.517180 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1acc33e7-f3ae-4131-a003-aa6b592269c6-kube-api-access-f846k" (OuterVolumeSpecName: "kube-api-access-f846k") pod "1acc33e7-f3ae-4131-a003-aa6b592269c6" (UID: "1acc33e7-f3ae-4131-a003-aa6b592269c6"). InnerVolumeSpecName "kube-api-access-f846k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.519746 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-scripts" (OuterVolumeSpecName: "scripts") pod "1acc33e7-f3ae-4131-a003-aa6b592269c6" (UID: "1acc33e7-f3ae-4131-a003-aa6b592269c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.523667 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1acc33e7-f3ae-4131-a003-aa6b592269c6" (UID: "1acc33e7-f3ae-4131-a003-aa6b592269c6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.561724 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1acc33e7-f3ae-4131-a003-aa6b592269c6" (UID: "1acc33e7-f3ae-4131-a003-aa6b592269c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.610247 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.611049 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f846k\" (UniqueName: \"kubernetes.io/projected/1acc33e7-f3ae-4131-a003-aa6b592269c6-kube-api-access-f846k\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.611141 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.611339 5010 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.610332 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-config-data" (OuterVolumeSpecName: "config-data") pod "1acc33e7-f3ae-4131-a003-aa6b592269c6" (UID: "1acc33e7-f3ae-4131-a003-aa6b592269c6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.663032 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f67746f54-2l6b9"] Feb 03 10:25:39 crc kubenswrapper[5010]: W0203 10:25:39.677724 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bab826b_af5f_4bd1_a68a_0bdda5f89d80.slice/crio-afff48a1e1ecd4c286603cd076555cc140e616a151373500817c0a06f61bf018 WatchSource:0}: Error finding container afff48a1e1ecd4c286603cd076555cc140e616a151373500817c0a06f61bf018: Status 404 returned error can't find the container with id afff48a1e1ecd4c286603cd076555cc140e616a151373500817c0a06f61bf018 Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.714104 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.940252 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" event={"ID":"f377630f-64f3-4fd9-8449-53d739d775c2","Type":"ContainerStarted","Data":"079eb74ecfdda51918da6c05552d51a853d958a4a620100baab1538f28f5e1a5"} Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.940310 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" event={"ID":"f377630f-64f3-4fd9-8449-53d739d775c2","Type":"ContainerStarted","Data":"9c4c6374d0b4cf420352015671ce87dd26cc2f7b9e3ef6b958122f72004ad8f7"} Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.968458 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f67746f54-2l6b9" event={"ID":"3bab826b-af5f-4bd1-a68a-0bdda5f89d80","Type":"ContainerStarted","Data":"74ccd033c9e2884f72e2c4c1b6c4e0e23117a22a85159de4838754ce36874bb7"} Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.968539 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f67746f54-2l6b9" event={"ID":"3bab826b-af5f-4bd1-a68a-0bdda5f89d80","Type":"ContainerStarted","Data":"afff48a1e1ecd4c286603cd076555cc140e616a151373500817c0a06f61bf018"} Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.988137 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bdd746887-zr9j6" event={"ID":"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7","Type":"ContainerStarted","Data":"1766ead65b13e47af68980b44ad86d632e4554f234b2ac1717f8ed7db11a09c1"} Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.988198 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bdd746887-zr9j6" event={"ID":"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7","Type":"ContainerStarted","Data":"ec158f66c9b3c707bfeee50c71073878189b9fd5415bd191cd57d56e768c8590"} Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.002747 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcvn8" event={"ID":"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb","Type":"ContainerStarted","Data":"8340acedc9cfb7958b5ed0fad5a8c1555a0dabbb9f7998f97b867b7a3dd1d05e"} Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.010606 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9wwp" 
event={"ID":"1acc33e7-f3ae-4131-a003-aa6b592269c6","Type":"ContainerDied","Data":"dcbb37a8fd2f82ef82d966d8287692e503ed1134f141d666defaaf1447e6aa0a"} Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.010926 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcbb37a8fd2f82ef82d966d8287692e503ed1134f141d666defaaf1447e6aa0a" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.011142 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.108207 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" podStartSLOduration=4.082653492 podStartE2EDuration="8.108167677s" podCreationTimestamp="2026-02-03 10:25:32 +0000 UTC" firstStartedPulling="2026-02-03 10:25:35.004523029 +0000 UTC m=+1405.160499158" lastFinishedPulling="2026-02-03 10:25:39.030037214 +0000 UTC m=+1409.186013343" observedRunningTime="2026-02-03 10:25:39.985723991 +0000 UTC m=+1410.141700120" watchObservedRunningTime="2026-02-03 10:25:40.108167677 +0000 UTC m=+1410.264143806" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.139461 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6bdd746887-zr9j6" podStartSLOduration=3.687187487 podStartE2EDuration="8.139422283s" podCreationTimestamp="2026-02-03 10:25:32 +0000 UTC" firstStartedPulling="2026-02-03 10:25:34.577699575 +0000 UTC m=+1404.733675704" lastFinishedPulling="2026-02-03 10:25:39.029934371 +0000 UTC m=+1409.185910500" observedRunningTime="2026-02-03 10:25:40.021192285 +0000 UTC m=+1410.177168414" watchObservedRunningTime="2026-02-03 10:25:40.139422283 +0000 UTC m=+1410.295398412" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.220798 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zcvn8" podStartSLOduration=5.56384366 podStartE2EDuration="12.22075683s" podCreationTimestamp="2026-02-03 10:25:28 +0000 UTC" firstStartedPulling="2026-02-03 10:25:32.371951993 +0000 UTC m=+1402.527928122" lastFinishedPulling="2026-02-03 10:25:39.028865163 +0000 UTC m=+1409.184841292" observedRunningTime="2026-02-03 10:25:40.0597629 +0000 UTC m=+1410.215739029" watchObservedRunningTime="2026-02-03 10:25:40.22075683 +0000 UTC m=+1410.376732959" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.369983 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:25:40 crc kubenswrapper[5010]: E0203 10:25:40.371169 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1acc33e7-f3ae-4131-a003-aa6b592269c6" containerName="cinder-db-sync" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.371191 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="1acc33e7-f3ae-4131-a003-aa6b592269c6" containerName="cinder-db-sync" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.371476 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="1acc33e7-f3ae-4131-a003-aa6b592269c6" containerName="cinder-db-sync" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.416943 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.428532 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.428941 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.431238 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.441443 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gk5q6" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.443115 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.455892 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2608e076-ccd5-4d9b-9739-d2815655090e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.456362 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrcvl\" (UniqueName: \"kubernetes.io/projected/2608e076-ccd5-4d9b-9739-d2815655090e-kube-api-access-jrcvl\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.456472 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.460726 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-scripts\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.460896 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.461395 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.550966 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cxfv2"] Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.551336 5010 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" podUID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerName="dnsmasq-dns" containerID="cri-o://bcf09a9582e13a71a798c91df881d34f9629fd8355c0382e4f0464933e875d83" gracePeriod=10 Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.564208 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.564281 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2608e076-ccd5-4d9b-9739-d2815655090e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.564350 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrcvl\" (UniqueName: \"kubernetes.io/projected/2608e076-ccd5-4d9b-9739-d2815655090e-kube-api-access-jrcvl\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.564370 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.564398 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-scripts\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.564420 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.575825 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.592015 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2608e076-ccd5-4d9b-9739-d2815655090e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.592240 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: 
I0203 10:25:40.595859 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-scripts\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.611242 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.627375 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrcvl\" (UniqueName: \"kubernetes.io/projected/2608e076-ccd5-4d9b-9739-d2815655090e-kube-api-access-jrcvl\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.645364 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6vbfz"] Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.648267 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.659838 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.662363 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.667169 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.675413 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6vbfz"] Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.692276 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.787342 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/872497ad-02bf-48fd-9ef7-c39591cd0cf3-logs\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.787481 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8w9d\" (UniqueName: \"kubernetes.io/projected/b88c8b02-54df-4761-acc8-c959005f4444-kube-api-access-d8w9d\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.787511 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-scripts\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.787540 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.787669 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-config\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.787868 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.788040 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.788079 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data-custom\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.788249 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.788336 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/872497ad-02bf-48fd-9ef7-c39591cd0cf3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.788623 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.788663 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvk2j\" (UniqueName: \"kubernetes.io/projected/872497ad-02bf-48fd-9ef7-c39591cd0cf3-kube-api-access-kvk2j\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.788727 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.821276 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.892201 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.892995 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.893047 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data-custom\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.893143 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.893476 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/872497ad-02bf-48fd-9ef7-c39591cd0cf3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.895927 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.897526 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.893206 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/872497ad-02bf-48fd-9ef7-c39591cd0cf3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898003 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898070 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvk2j\" (UniqueName: \"kubernetes.io/projected/872497ad-02bf-48fd-9ef7-c39591cd0cf3-kube-api-access-kvk2j\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898157 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898251 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/872497ad-02bf-48fd-9ef7-c39591cd0cf3-logs\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898536 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8w9d\" (UniqueName: \"kubernetes.io/projected/b88c8b02-54df-4761-acc8-c959005f4444-kube-api-access-d8w9d\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898591 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-scripts\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898651 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898776 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-config\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.900502 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-config\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.901163 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/872497ad-02bf-48fd-9ef7-c39591cd0cf3-logs\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.902229 5010 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.922761 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.928462 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-scripts\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.936579 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data-custom\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.939730 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvk2j\" (UniqueName: \"kubernetes.io/projected/872497ad-02bf-48fd-9ef7-c39591cd0cf3-kube-api-access-kvk2j\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.941189 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.944709 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.977805 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8w9d\" (UniqueName: \"kubernetes.io/projected/b88c8b02-54df-4761-acc8-c959005f4444-kube-api-access-d8w9d\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.021534 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.137881 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.148912 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f67746f54-2l6b9" event={"ID":"3bab826b-af5f-4bd1-a68a-0bdda5f89d80","Type":"ContainerStarted","Data":"f91c84248ede11ce656d67a63993f3673baa475b80485b6b3e89ecf47a959661"} Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.162741 5010 generic.go:334] "Generic (PLEG): container finished" podID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerID="bcf09a9582e13a71a798c91df881d34f9629fd8355c0382e4f0464933e875d83" exitCode=0 Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.162947 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" event={"ID":"73d76595-42a6-4756-a5c5-7135fe150f1e","Type":"ContainerDied","Data":"bcf09a9582e13a71a798c91df881d34f9629fd8355c0382e4f0464933e875d83"} Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.171872 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.189624 5010 generic.go:334] "Generic (PLEG): container finished" podID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerID="1e0c0b172a23175ded34e25aee553cea1577eb12ecd614b67b01f55633483ef4" exitCode=137 Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.189676 5010 generic.go:334] "Generic (PLEG): container finished" podID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerID="5ec57a7e44cc0f82c124057f7268cf9e4686f96d4ca8ba657715ac39cccda8e4" exitCode=137 Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.191121 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5b4c5ff-x859r" event={"ID":"716318b2-6f04-4ff9-94c2-e107ebf51cb6","Type":"ContainerDied","Data":"1e0c0b172a23175ded34e25aee553cea1577eb12ecd614b67b01f55633483ef4"} Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.191167 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5b4c5ff-x859r" event={"ID":"716318b2-6f04-4ff9-94c2-e107ebf51cb6","Type":"ContainerDied","Data":"5ec57a7e44cc0f82c124057f7268cf9e4686f96d4ca8ba657715ac39cccda8e4"} Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.204940 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6f67746f54-2l6b9" podStartSLOduration=4.20490728 podStartE2EDuration="4.20490728s" podCreationTimestamp="2026-02-03 10:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:41.186255019 +0000 UTC m=+1411.342231158" watchObservedRunningTime="2026-02-03 10:25:41.20490728 +0000 UTC m=+1411.360883419" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.637823 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58c5b6f6cc-94dq7"] Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.638685 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58c5b6f6cc-94dq7" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-api" containerID="cri-o://f95d5f955943f1d6179b138d89e148c3a26347690a24c1fd2737b1cfd76d3955" gracePeriod=30 Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.640105 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58c5b6f6cc-94dq7" 
podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-httpd" containerID="cri-o://e0894a68073b3bd07b800e9f0879ea84ca668a89746cac6928280bad0a28dded" gracePeriod=30 Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.696096 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-78c78c7889-r9575"] Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.704514 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.711846 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78c78c7889-r9575"] Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.724316 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.854379 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-sb\") pod \"73d76595-42a6-4756-a5c5-7135fe150f1e\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.854475 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-nb\") pod \"73d76595-42a6-4756-a5c5-7135fe150f1e\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.854592 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-svc\") pod \"73d76595-42a6-4756-a5c5-7135fe150f1e\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.854634 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-swift-storage-0\") pod \"73d76595-42a6-4756-a5c5-7135fe150f1e\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.854841 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-config\") pod \"73d76595-42a6-4756-a5c5-7135fe150f1e\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.854924 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tngbc\" (UniqueName: \"kubernetes.io/projected/73d76595-42a6-4756-a5c5-7135fe150f1e-kube-api-access-tngbc\") pod \"73d76595-42a6-4756-a5c5-7135fe150f1e\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.864245 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncb9c\" (UniqueName: \"kubernetes.io/projected/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-kube-api-access-ncb9c\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.864584 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-httpd-config\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.864985 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-ovndb-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.865092 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-public-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.865294 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-combined-ca-bundle\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.865337 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-internal-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.866034 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-config\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.898650 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d76595-42a6-4756-a5c5-7135fe150f1e-kube-api-access-tngbc" (OuterVolumeSpecName: "kube-api-access-tngbc") pod "73d76595-42a6-4756-a5c5-7135fe150f1e" (UID: "73d76595-42a6-4756-a5c5-7135fe150f1e"). InnerVolumeSpecName "kube-api-access-tngbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.981461 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncb9c\" (UniqueName: \"kubernetes.io/projected/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-kube-api-access-ncb9c\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.982706 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-httpd-config\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.983287 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-ovndb-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.983400 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-public-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.983615 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-combined-ca-bundle\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.983662 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-internal-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.983846 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-config\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.984178 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tngbc\" (UniqueName: \"kubernetes.io/projected/73d76595-42a6-4756-a5c5-7135fe150f1e-kube-api-access-tngbc\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.057881 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-ovndb-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.058880 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-combined-ca-bundle\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.069306 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-httpd-config\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.069938 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-public-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.070409 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-58c5b6f6cc-94dq7" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9696/\": read tcp 10.217.0.2:38112->10.217.0.150:9696: read: connection reset by peer" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.071797 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncb9c\" (UniqueName: \"kubernetes.io/projected/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-kube-api-access-ncb9c\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.076075 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-config\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.112288 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-internal-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.198572 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "73d76595-42a6-4756-a5c5-7135fe150f1e" (UID: "73d76595-42a6-4756-a5c5-7135fe150f1e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.198590 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "73d76595-42a6-4756-a5c5-7135fe150f1e" (UID: "73d76595-42a6-4756-a5c5-7135fe150f1e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.232417 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73d76595-42a6-4756-a5c5-7135fe150f1e" (UID: "73d76595-42a6-4756-a5c5-7135fe150f1e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.298132 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.298115 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" event={"ID":"73d76595-42a6-4756-a5c5-7135fe150f1e","Type":"ContainerDied","Data":"551880a184d3cea9debb67a96e769d028b7329cfb831b90c16d9edf472195a6b"} Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.298366 5010 scope.go:117] "RemoveContainer" containerID="bcf09a9582e13a71a798c91df881d34f9629fd8355c0382e4f0464933e875d83" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.299079 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.299157 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.303193 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.303269 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.303282 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.308167 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-config" (OuterVolumeSpecName: "config") pod "73d76595-42a6-4756-a5c5-7135fe150f1e" (UID: "73d76595-42a6-4756-a5c5-7135fe150f1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.317868 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73d76595-42a6-4756-a5c5-7135fe150f1e" (UID: "73d76595-42a6-4756-a5c5-7135fe150f1e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.373928 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.415483 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.415943 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.440798 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.535274 5010 scope.go:117] "RemoveContainer" containerID="2c19193a99dd2b89cf342b5374e508ef59ea58fbd9c5b83248ac4024b880fe95" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.622348 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/716318b2-6f04-4ff9-94c2-e107ebf51cb6-logs\") pod \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.622496 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-config-data\") pod \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.622642 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-scripts\") pod \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.622860 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/716318b2-6f04-4ff9-94c2-e107ebf51cb6-horizon-secret-key\") pod \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.623000 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d4dk\" (UniqueName: \"kubernetes.io/projected/716318b2-6f04-4ff9-94c2-e107ebf51cb6-kube-api-access-8d4dk\") pod \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.625455 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.626487 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/716318b2-6f04-4ff9-94c2-e107ebf51cb6-logs" (OuterVolumeSpecName: "logs") pod "716318b2-6f04-4ff9-94c2-e107ebf51cb6" (UID: "716318b2-6f04-4ff9-94c2-e107ebf51cb6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.627781 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/716318b2-6f04-4ff9-94c2-e107ebf51cb6-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.638749 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/716318b2-6f04-4ff9-94c2-e107ebf51cb6-kube-api-access-8d4dk" (OuterVolumeSpecName: "kube-api-access-8d4dk") pod "716318b2-6f04-4ff9-94c2-e107ebf51cb6" (UID: "716318b2-6f04-4ff9-94c2-e107ebf51cb6"). InnerVolumeSpecName "kube-api-access-8d4dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.645560 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cxfv2"] Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.654557 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/716318b2-6f04-4ff9-94c2-e107ebf51cb6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "716318b2-6f04-4ff9-94c2-e107ebf51cb6" (UID: "716318b2-6f04-4ff9-94c2-e107ebf51cb6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.657108 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cxfv2"] Feb 03 10:25:42 crc kubenswrapper[5010]: W0203 10:25:42.676384 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2608e076_ccd5_4d9b_9739_d2815655090e.slice/crio-8fc43be7c4e38eab87c6ce057e45c890d78c06e59c1c3f94eb288aeb3ef2742e WatchSource:0}: Error finding container 8fc43be7c4e38eab87c6ce057e45c890d78c06e59c1c3f94eb288aeb3ef2742e: Status 404 returned error can't find the container with id 8fc43be7c4e38eab87c6ce057e45c890d78c06e59c1c3f94eb288aeb3ef2742e Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.687561 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-config-data" (OuterVolumeSpecName: "config-data") pod "716318b2-6f04-4ff9-94c2-e107ebf51cb6" (UID: "716318b2-6f04-4ff9-94c2-e107ebf51cb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: E0203 10:25:42.688658 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31521b0f_9e4f_4cfc_b0e8_e9e2bd2ca688.slice/crio-conmon-e0894a68073b3bd07b800e9f0879ea84ca668a89746cac6928280bad0a28dded.scope\": RecentStats: unable to find data in memory cache]" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.699407 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-scripts" (OuterVolumeSpecName: "scripts") pod "716318b2-6f04-4ff9-94c2-e107ebf51cb6" (UID: "716318b2-6f04-4ff9-94c2-e107ebf51cb6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.732777 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.733308 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.733320 5010 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/716318b2-6f04-4ff9-94c2-e107ebf51cb6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.733352 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d4dk\" (UniqueName: \"kubernetes.io/projected/716318b2-6f04-4ff9-94c2-e107ebf51cb6-kube-api-access-8d4dk\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.815706 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.815836 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.817451 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"2cc2ce22d6ea86e28f6eb264d0d9c9e725b7685d6ab0fd02531064a6b9b028b0"} pod="openstack/horizon-7cdcd56868-k9h7g" containerMessage="Container horizon failed startup probe, will be restarted" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.817493 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" containerID="cri-o://2cc2ce22d6ea86e28f6eb264d0d9c9e725b7685d6ab0fd02531064a6b9b028b0" gracePeriod=30 Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.022506 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6vbfz"] Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.041197 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.129415 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988db4-2mpfb" podUID="2fedcc57-b16c-4177-a10e-f627269b4adb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.129562 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.130945 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" 
containerStatusID={"Type":"cri-o","ID":"45c56002ab101b0e77fc5934aa412e9d50c3e636af770ec4fe10888a673e7f7e"} pod="openstack/horizon-6cc988db4-2mpfb" containerMessage="Container horizon failed startup probe, will be restarted" Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.131003 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cc988db4-2mpfb" podUID="2fedcc57-b16c-4177-a10e-f627269b4adb" containerName="horizon" containerID="cri-o://45c56002ab101b0e77fc5934aa412e9d50c3e636af770ec4fe10888a673e7f7e" gracePeriod=30 Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.323356 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2608e076-ccd5-4d9b-9739-d2815655090e","Type":"ContainerStarted","Data":"8fc43be7c4e38eab87c6ce057e45c890d78c06e59c1c3f94eb288aeb3ef2742e"} Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.344041 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5b4c5ff-x859r" event={"ID":"716318b2-6f04-4ff9-94c2-e107ebf51cb6","Type":"ContainerDied","Data":"2db889447ff0bc0e6f1ca25bbfa660b5dc01678a634757b799ec80a5560e67e4"} Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.344117 5010 scope.go:117] "RemoveContainer" containerID="1e0c0b172a23175ded34e25aee553cea1577eb12ecd614b67b01f55633483ef4" Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.344279 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.367143 5010 generic.go:334] "Generic (PLEG): container finished" podID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerID="e0894a68073b3bd07b800e9f0879ea84ca668a89746cac6928280bad0a28dded" exitCode=0 Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.368547 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c5b6f6cc-94dq7" event={"ID":"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688","Type":"ContainerDied","Data":"e0894a68073b3bd07b800e9f0879ea84ca668a89746cac6928280bad0a28dded"} Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.446611 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b5b4c5ff-x859r"] Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.484114 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b5b4c5ff-x859r"] Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.496361 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78c78c7889-r9575"] Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.613093 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-58c5b6f6cc-94dq7" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9696/\": dial tcp 10.217.0.150:9696: connect: connection refused" Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.750922 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:25:44 crc kubenswrapper[5010]: I0203 10:25:44.554325 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" path="/var/lib/kubelet/pods/716318b2-6f04-4ff9-94c2-e107ebf51cb6/volumes" Feb 03 10:25:44 crc kubenswrapper[5010]: I0203 10:25:44.555046 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73d76595-42a6-4756-a5c5-7135fe150f1e" 
path="/var/lib/kubelet/pods/73d76595-42a6-4756-a5c5-7135fe150f1e/volumes" Feb 03 10:25:46 crc kubenswrapper[5010]: I0203 10:25:46.172138 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:46 crc kubenswrapper[5010]: I0203 10:25:46.372931 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:47 crc kubenswrapper[5010]: I0203 10:25:47.479017 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:49 crc kubenswrapper[5010]: I0203 10:25:49.288422 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:49 crc kubenswrapper[5010]: I0203 10:25:49.289006 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.283184 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.366344 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-595698fff8-qzxdr"] Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.366865 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-595698fff8-qzxdr" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api-log" containerID="cri-o://e6b14e112fe4e444557f7a3aff312b5084d7db0d95368f7bd4f747a1a68cca9e" gracePeriod=30 Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.367522 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-595698fff8-qzxdr" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api" containerID="cri-o://a2e083c61dc7c9a5c3fac49824f7953d3fb85c8844f8a1f4ef14207348bfa1d9" gracePeriod=30 Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.384580 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-595698fff8-qzxdr" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": EOF" Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.384709 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-595698fff8-qzxdr" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": EOF" Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.411443 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zcvn8" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="registry-server" probeResult="failure" output=< Feb 03 10:25:50 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s Feb 03 10:25:50 crc kubenswrapper[5010]: > Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.611574 5010 generic.go:334] "Generic (PLEG): container finished" podID="34b3477b-06e6-4914-a048-54af2ebc0250" containerID="e6b14e112fe4e444557f7a3aff312b5084d7db0d95368f7bd4f747a1a68cca9e" exitCode=143 Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.611707 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-595698fff8-qzxdr" event={"ID":"34b3477b-06e6-4914-a048-54af2ebc0250","Type":"ContainerDied","Data":"e6b14e112fe4e444557f7a3aff312b5084d7db0d95368f7bd4f747a1a68cca9e"} Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.637250 5010 generic.go:334] "Generic (PLEG): container finished" podID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerID="f95d5f955943f1d6179b138d89e148c3a26347690a24c1fd2737b1cfd76d3955" exitCode=0 Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.637424 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c5b6f6cc-94dq7" event={"ID":"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688","Type":"ContainerDied","Data":"f95d5f955943f1d6179b138d89e148c3a26347690a24c1fd2737b1cfd76d3955"} Feb 03 10:25:52 crc kubenswrapper[5010]: I0203 10:25:52.993009 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:53 crc kubenswrapper[5010]: I0203 10:25:53.087166 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:54 crc kubenswrapper[5010]: W0203 10:25:54.236437 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod872497ad_02bf_48fd_9ef7_c39591cd0cf3.slice/crio-c4597e5fb6f0efc59bba027f6c62619a6af54fb50a6a0e89101889e721398156 WatchSource:0}: Error finding container c4597e5fb6f0efc59bba027f6c62619a6af54fb50a6a0e89101889e721398156: Status 404 returned error can't find the container with id c4597e5fb6f0efc59bba027f6c62619a6af54fb50a6a0e89101889e721398156 Feb 03 10:25:54 crc kubenswrapper[5010]: W0203 10:25:54.250607 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod158ac65e_849e_4f85_a4b6_1ac4bde1a1ec.slice/crio-315fe4b6a3bc1564af8b664feb3192140e44462ed97bc092ace115e8b833116f WatchSource:0}: Error finding container 315fe4b6a3bc1564af8b664feb3192140e44462ed97bc092ace115e8b833116f: Status 404 returned error can't find the container with id 315fe4b6a3bc1564af8b664feb3192140e44462ed97bc092ace115e8b833116f Feb 03 10:25:54 crc kubenswrapper[5010]: I0203 10:25:54.475851 5010 scope.go:117] "RemoveContainer" containerID="5ec57a7e44cc0f82c124057f7268cf9e4686f96d4ca8ba657715ac39cccda8e4" Feb 03 10:25:54 crc kubenswrapper[5010]: I0203 10:25:54.690616 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"872497ad-02bf-48fd-9ef7-c39591cd0cf3","Type":"ContainerStarted","Data":"c4597e5fb6f0efc59bba027f6c62619a6af54fb50a6a0e89101889e721398156"} Feb 03 10:25:54 crc kubenswrapper[5010]: I0203 10:25:54.692065 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78c78c7889-r9575" event={"ID":"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec","Type":"ContainerStarted","Data":"315fe4b6a3bc1564af8b664feb3192140e44462ed97bc092ace115e8b833116f"} Feb 03 10:25:54 crc kubenswrapper[5010]: I0203 10:25:54.696794 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" event={"ID":"b88c8b02-54df-4761-acc8-c959005f4444","Type":"ContainerStarted","Data":"2d51e4ddd011d0ec5a5a6ac940b6dc440f8c2ebbdfedfd082c8cf295f749780f"} Feb 03 10:25:55 crc kubenswrapper[5010]: E0203 10:25:55.204301 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/ubi9/httpd-24:latest" Feb 03 10:25:55 crc kubenswrapper[5010]: E0203 10:25:55.205063 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4rmrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4338eb03-3ad6-4d68-8d8a-a37694aff6d7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:25:55 crc kubenswrapper[5010]: E0203 10:25:55.206310 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.242615 5010 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.347688 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-public-tls-certs\") pod \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.347771 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-httpd-config\") pod \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.347917 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-combined-ca-bundle\") pod \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.348124 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-ovndb-tls-certs\") pod \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.348186 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-config\") pod \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.348402 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-internal-tls-certs\") pod \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.348536 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnx67\" (UniqueName: \"kubernetes.io/projected/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-kube-api-access-bnx67\") pod \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.360827 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" (UID: "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.365996 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-kube-api-access-bnx67" (OuterVolumeSpecName: "kube-api-access-bnx67") pod "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" (UID: "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688"). InnerVolumeSpecName "kube-api-access-bnx67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.434547 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" (UID: "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.443436 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" (UID: "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.453944 5010 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.454002 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnx67\" (UniqueName: \"kubernetes.io/projected/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-kube-api-access-bnx67\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.454019 5010 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.454036 5010 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.456069 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" (UID: "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.474279 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-config" (OuterVolumeSpecName: "config") pod "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" (UID: "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.514989 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" (UID: "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.556794 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.556977 5010 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.556990 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.732852 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="ceilometer-notification-agent" containerID="cri-o://d91d141426317acd31c21e9040c1e38df0008cc513ccacd6d4ecf8718788f6f7" gracePeriod=30 Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.733051 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.736125 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c5b6f6cc-94dq7" event={"ID":"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688","Type":"ContainerDied","Data":"b27f611dc82e161f85b167c99dbce2d08eedaac7c3dd33e70725328f6c7d0a68"} Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.736320 5010 scope.go:117] "RemoveContainer" containerID="e0894a68073b3bd07b800e9f0879ea84ca668a89746cac6928280bad0a28dded" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.737439 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="sg-core" containerID="cri-o://66c74d715b2eacb41bf0f0e39922576ad416b3eb1d6ad6955ec6036858cd2f1d" gracePeriod=30 Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.755813 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.801578 5010 scope.go:117] "RemoveContainer" containerID="f95d5f955943f1d6179b138d89e148c3a26347690a24c1fd2737b1cfd76d3955" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.830177 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58c5b6f6cc-94dq7"] Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.858047 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-58c5b6f6cc-94dq7"] Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.908725 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-595698fff8-qzxdr" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:33056->10.217.0.160:9311: read: connection reset by peer" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.909677 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-595698fff8-qzxdr" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api-log" probeResult="failure" 
output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:33054->10.217.0.160:9311: read: connection reset by peer" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.921407 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.014739 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7f744c8944-2zwzr"] Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.015040 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7f744c8944-2zwzr" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-log" containerID="cri-o://68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8" gracePeriod=30 Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.015533 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7f744c8944-2zwzr" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-api" containerID="cri-o://0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d" gracePeriod=30 Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.096170 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.535962 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" path="/var/lib/kubelet/pods/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688/volumes" Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.893815 5010 generic.go:334] "Generic (PLEG): container finished" podID="b88c8b02-54df-4761-acc8-c959005f4444" containerID="49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50" exitCode=0 Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.894314 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" event={"ID":"b88c8b02-54df-4761-acc8-c959005f4444","Type":"ContainerDied","Data":"49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50"} Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.900684 5010 generic.go:334] "Generic (PLEG): container finished" podID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerID="66c74d715b2eacb41bf0f0e39922576ad416b3eb1d6ad6955ec6036858cd2f1d" exitCode=2 Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.900760 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4338eb03-3ad6-4d68-8d8a-a37694aff6d7","Type":"ContainerDied","Data":"66c74d715b2eacb41bf0f0e39922576ad416b3eb1d6ad6955ec6036858cd2f1d"} Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.907046 5010 generic.go:334] "Generic (PLEG): container finished" podID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerID="68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8" exitCode=143 Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.907118 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f744c8944-2zwzr" event={"ID":"8d6356a1-c07c-4d04-8d48-7f13a822ddf5","Type":"ContainerDied","Data":"68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8"} Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.920239 5010 generic.go:334] "Generic (PLEG): container finished" podID="34b3477b-06e6-4914-a048-54af2ebc0250" containerID="a2e083c61dc7c9a5c3fac49824f7953d3fb85c8844f8a1f4ef14207348bfa1d9" 
exitCode=0 Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.920316 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595698fff8-qzxdr" event={"ID":"34b3477b-06e6-4914-a048-54af2ebc0250","Type":"ContainerDied","Data":"a2e083c61dc7c9a5c3fac49824f7953d3fb85c8844f8a1f4ef14207348bfa1d9"} Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.054449 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78c78c7889-r9575" event={"ID":"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec","Type":"ContainerStarted","Data":"96438b6700091f1bab67b947cb73994cfe7b663ebf93f9a0880f7b75b38e3533"} Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.192877 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.287414 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-combined-ca-bundle\") pod \"34b3477b-06e6-4914-a048-54af2ebc0250\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.287923 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sz82\" (UniqueName: \"kubernetes.io/projected/34b3477b-06e6-4914-a048-54af2ebc0250-kube-api-access-8sz82\") pod \"34b3477b-06e6-4914-a048-54af2ebc0250\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.288100 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b3477b-06e6-4914-a048-54af2ebc0250-logs\") pod \"34b3477b-06e6-4914-a048-54af2ebc0250\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.288136 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data-custom\") pod \"34b3477b-06e6-4914-a048-54af2ebc0250\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.288166 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data\") pod \"34b3477b-06e6-4914-a048-54af2ebc0250\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.288653 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34b3477b-06e6-4914-a048-54af2ebc0250-logs" (OuterVolumeSpecName: "logs") pod "34b3477b-06e6-4914-a048-54af2ebc0250" (UID: "34b3477b-06e6-4914-a048-54af2ebc0250"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.296020 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b3477b-06e6-4914-a048-54af2ebc0250-kube-api-access-8sz82" (OuterVolumeSpecName: "kube-api-access-8sz82") pod "34b3477b-06e6-4914-a048-54af2ebc0250" (UID: "34b3477b-06e6-4914-a048-54af2ebc0250"). InnerVolumeSpecName "kube-api-access-8sz82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.297558 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "34b3477b-06e6-4914-a048-54af2ebc0250" (UID: "34b3477b-06e6-4914-a048-54af2ebc0250"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.327584 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34b3477b-06e6-4914-a048-54af2ebc0250" (UID: "34b3477b-06e6-4914-a048-54af2ebc0250"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.363776 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data" (OuterVolumeSpecName: "config-data") pod "34b3477b-06e6-4914-a048-54af2ebc0250" (UID: "34b3477b-06e6-4914-a048-54af2ebc0250"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.390677 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sz82\" (UniqueName: \"kubernetes.io/projected/34b3477b-06e6-4914-a048-54af2ebc0250-kube-api-access-8sz82\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.390726 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b3477b-06e6-4914-a048-54af2ebc0250-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.390741 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.390753 5010 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.390766 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.823283 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824017 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api-log" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824034 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api-log" Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824050 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon-log" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824055 5010 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon-log" Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824071 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824077 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api" Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824094 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-httpd" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824099 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-httpd" Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824115 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerName="init" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824121 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerName="init" Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824128 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824135 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon" Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824154 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerName="dnsmasq-dns" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824159 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerName="dnsmasq-dns" Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824190 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-api" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824196 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-api" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824686 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api-log" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824701 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824716 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-api" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824734 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-httpd" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824753 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824768 5010 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon-log" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824784 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerName="dnsmasq-dns" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.825511 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.828311 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-vzjq5" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.828519 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.829039 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.833395 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c80632c0-72bc-461d-8e87-591d0ddbc1a8-openstack-config\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.833432 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c80632c0-72bc-461d-8e87-591d0ddbc1a8-openstack-config-secret\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.833478 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nq64\" (UniqueName: \"kubernetes.io/projected/c80632c0-72bc-461d-8e87-591d0ddbc1a8-kube-api-access-9nq64\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.833590 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80632c0-72bc-461d-8e87-591d0ddbc1a8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.846103 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.935008 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80632c0-72bc-461d-8e87-591d0ddbc1a8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.935116 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c80632c0-72bc-461d-8e87-591d0ddbc1a8-openstack-config\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.935140 5010 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c80632c0-72bc-461d-8e87-591d0ddbc1a8-openstack-config-secret\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.935182 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nq64\" (UniqueName: \"kubernetes.io/projected/c80632c0-72bc-461d-8e87-591d0ddbc1a8-kube-api-access-9nq64\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.942956 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c80632c0-72bc-461d-8e87-591d0ddbc1a8-openstack-config\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.944387 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80632c0-72bc-461d-8e87-591d0ddbc1a8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.945044 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c80632c0-72bc-461d-8e87-591d0ddbc1a8-openstack-config-secret\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.961364 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nq64\" (UniqueName: \"kubernetes.io/projected/c80632c0-72bc-461d-8e87-591d0ddbc1a8-kube-api-access-9nq64\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.078115 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" event={"ID":"b88c8b02-54df-4761-acc8-c959005f4444","Type":"ContainerStarted","Data":"fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77"} Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.079539 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.082592 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2608e076-ccd5-4d9b-9739-d2815655090e","Type":"ContainerStarted","Data":"02b1b0db1e1d1490264d407bf569bd8135ae614f331340a7de745dc600379321"} Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.115270 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" podStartSLOduration=18.115238337 podStartE2EDuration="18.115238337s" podCreationTimestamp="2026-02-03 10:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:58.10333097 +0000 UTC m=+1428.259307119" watchObservedRunningTime="2026-02-03 10:25:58.115238337 +0000 UTC m=+1428.271214476" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.125453 5010 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-api-595698fff8-qzxdr" event={"ID":"34b3477b-06e6-4914-a048-54af2ebc0250","Type":"ContainerDied","Data":"276b5ede8be32b2fcd5e4dea2a354a0412bc1e3d512cddd2da2cb8731f6a5abd"} Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.125535 5010 scope.go:117] "RemoveContainer" containerID="a2e083c61dc7c9a5c3fac49824f7953d3fb85c8844f8a1f4ef14207348bfa1d9" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.125624 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.132373 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"872497ad-02bf-48fd-9ef7-c39591cd0cf3","Type":"ContainerStarted","Data":"8f0a78e854f7929105346a11ec8aadfb8c983687a2549ad3dc08c8797e25a961"} Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.171549 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78c78c7889-r9575" event={"ID":"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec","Type":"ContainerStarted","Data":"6bd0c94c86ec6df8b63fc08a75b05e5f9fa252071bdab7ca204a7a1f441edd95"} Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.171945 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.173511 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.191711 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-595698fff8-qzxdr"] Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.220184 5010 scope.go:117] "RemoveContainer" containerID="e6b14e112fe4e444557f7a3aff312b5084d7db0d95368f7bd4f747a1a68cca9e" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.222580 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-595698fff8-qzxdr"] Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.224284 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-78c78c7889-r9575" podStartSLOduration=17.2242652 podStartE2EDuration="17.2242652s" podCreationTimestamp="2026-02-03 10:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:58.212285654 +0000 UTC m=+1428.368261803" watchObservedRunningTime="2026-02-03 10:25:58.2242652 +0000 UTC m=+1428.380241339" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.536294 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" path="/var/lib/kubelet/pods/34b3477b-06e6-4914-a048-54af2ebc0250/volumes" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.853073 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.189956 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"872497ad-02bf-48fd-9ef7-c39591cd0cf3","Type":"ContainerStarted","Data":"6c176208520e0e4aa9ea320d1edfe8ab83a7718fb33505386deba54305a99180"} Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.190145 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.190141 5010 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api-log" containerID="cri-o://8f0a78e854f7929105346a11ec8aadfb8c983687a2549ad3dc08c8797e25a961" gracePeriod=30 Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.190162 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api" containerID="cri-o://6c176208520e0e4aa9ea320d1edfe8ab83a7718fb33505386deba54305a99180" gracePeriod=30 Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.198259 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2608e076-ccd5-4d9b-9739-d2815655090e","Type":"ContainerStarted","Data":"9afac37147605919491f382bbfc27637b26db8fa47e1eb9f1d9454af8578414f"} Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.200472 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c80632c0-72bc-461d-8e87-591d0ddbc1a8","Type":"ContainerStarted","Data":"faae3cfb1a25e4d794ba91c5f847593fa8dd9af5786ff41a891cf150c042447d"} Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.229791 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=19.229762069 podStartE2EDuration="19.229762069s" podCreationTimestamp="2026-02-03 10:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:59.218979492 +0000 UTC m=+1429.374955621" watchObservedRunningTime="2026-02-03 10:25:59.229762069 +0000 UTC m=+1429.385738198" Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.252444 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.4051715080000005 podStartE2EDuration="19.252401101s" podCreationTimestamp="2026-02-03 10:25:40 +0000 UTC" firstStartedPulling="2026-02-03 10:25:42.688199698 +0000 UTC m=+1412.844175827" lastFinishedPulling="2026-02-03 10:25:55.535429291 +0000 UTC m=+1425.691405420" observedRunningTime="2026-02-03 10:25:59.245953445 +0000 UTC m=+1429.401929584" watchObservedRunningTime="2026-02-03 10:25:59.252401101 +0000 UTC m=+1429.408377250" Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.940934 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.112077 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-scripts\") pod \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.112340 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-combined-ca-bundle\") pod \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.112393 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj8c4\" (UniqueName: \"kubernetes.io/projected/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-kube-api-access-rj8c4\") pod \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.112506 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-config-data\") pod \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.112563 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-public-tls-certs\") pod \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.112657 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-logs\") pod \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.112707 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-internal-tls-certs\") pod \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.115571 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-logs" (OuterVolumeSpecName: "logs") pod "8d6356a1-c07c-4d04-8d48-7f13a822ddf5" (UID: "8d6356a1-c07c-4d04-8d48-7f13a822ddf5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.144916 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-scripts" (OuterVolumeSpecName: "scripts") pod "8d6356a1-c07c-4d04-8d48-7f13a822ddf5" (UID: "8d6356a1-c07c-4d04-8d48-7f13a822ddf5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.153592 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-kube-api-access-rj8c4" (OuterVolumeSpecName: "kube-api-access-rj8c4") pod "8d6356a1-c07c-4d04-8d48-7f13a822ddf5" (UID: "8d6356a1-c07c-4d04-8d48-7f13a822ddf5"). InnerVolumeSpecName "kube-api-access-rj8c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.220194 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.220271 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj8c4\" (UniqueName: \"kubernetes.io/projected/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-kube-api-access-rj8c4\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.220289 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.246389 5010 generic.go:334] "Generic (PLEG): container finished" podID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerID="d91d141426317acd31c21e9040c1e38df0008cc513ccacd6d4ecf8718788f6f7" exitCode=0 Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.246546 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4338eb03-3ad6-4d68-8d8a-a37694aff6d7","Type":"ContainerDied","Data":"d91d141426317acd31c21e9040c1e38df0008cc513ccacd6d4ecf8718788f6f7"} Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.257139 5010 generic.go:334] "Generic (PLEG): container finished" podID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerID="0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d" exitCode=0 Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.257300 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f744c8944-2zwzr" event={"ID":"8d6356a1-c07c-4d04-8d48-7f13a822ddf5","Type":"ContainerDied","Data":"0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d"} Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.257389 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f744c8944-2zwzr" event={"ID":"8d6356a1-c07c-4d04-8d48-7f13a822ddf5","Type":"ContainerDied","Data":"089e9b9bfea0632f8dc13a626391ff9a317374bb6a62f576e2749c15e06ebc0d"} Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.257415 5010 scope.go:117] "RemoveContainer" containerID="0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.257810 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.260506 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-config-data" (OuterVolumeSpecName: "config-data") pod "8d6356a1-c07c-4d04-8d48-7f13a822ddf5" (UID: "8d6356a1-c07c-4d04-8d48-7f13a822ddf5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.278563 5010 generic.go:334] "Generic (PLEG): container finished" podID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerID="6c176208520e0e4aa9ea320d1edfe8ab83a7718fb33505386deba54305a99180" exitCode=0 Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.278631 5010 generic.go:334] "Generic (PLEG): container finished" podID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerID="8f0a78e854f7929105346a11ec8aadfb8c983687a2549ad3dc08c8797e25a961" exitCode=143 Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.279104 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"872497ad-02bf-48fd-9ef7-c39591cd0cf3","Type":"ContainerDied","Data":"6c176208520e0e4aa9ea320d1edfe8ab83a7718fb33505386deba54305a99180"} Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.279233 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"872497ad-02bf-48fd-9ef7-c39591cd0cf3","Type":"ContainerDied","Data":"8f0a78e854f7929105346a11ec8aadfb8c983687a2549ad3dc08c8797e25a961"} Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.325093 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.329702 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.336342 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d6356a1-c07c-4d04-8d48-7f13a822ddf5" (UID: "8d6356a1-c07c-4d04-8d48-7f13a822ddf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.364245 5010 scope.go:117] "RemoveContainer" containerID="68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.403904 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8d6356a1-c07c-4d04-8d48-7f13a822ddf5" (UID: "8d6356a1-c07c-4d04-8d48-7f13a822ddf5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.413372 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8d6356a1-c07c-4d04-8d48-7f13a822ddf5" (UID: "8d6356a1-c07c-4d04-8d48-7f13a822ddf5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.416700 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zcvn8" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="registry-server" probeResult="failure" output=< Feb 03 10:26:00 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s Feb 03 10:26:00 crc kubenswrapper[5010]: > Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.428323 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.430416 5010 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.430435 5010 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.532312 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-combined-ca-bundle\") pod \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.532451 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-config-data\") pod \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.532553 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-scripts\") pod \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.532660 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-sg-core-conf-yaml\") pod \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.532757 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-run-httpd\") pod \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.532789 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-log-httpd\") pod \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.532882 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rmrl\" 
(UniqueName: \"kubernetes.io/projected/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-kube-api-access-4rmrl\") pod \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.553941 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-scripts" (OuterVolumeSpecName: "scripts") pod "4338eb03-3ad6-4d68-8d8a-a37694aff6d7" (UID: "4338eb03-3ad6-4d68-8d8a-a37694aff6d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.554552 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-kube-api-access-4rmrl" (OuterVolumeSpecName: "kube-api-access-4rmrl") pod "4338eb03-3ad6-4d68-8d8a-a37694aff6d7" (UID: "4338eb03-3ad6-4d68-8d8a-a37694aff6d7"). InnerVolumeSpecName "kube-api-access-4rmrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.566454 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.567595 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4338eb03-3ad6-4d68-8d8a-a37694aff6d7" (UID: "4338eb03-3ad6-4d68-8d8a-a37694aff6d7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.567851 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4338eb03-3ad6-4d68-8d8a-a37694aff6d7" (UID: "4338eb03-3ad6-4d68-8d8a-a37694aff6d7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.588988 5010 scope.go:117] "RemoveContainer" containerID="0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d" Feb 03 10:26:00 crc kubenswrapper[5010]: E0203 10:26:00.598553 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d\": container with ID starting with 0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d not found: ID does not exist" containerID="0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.598630 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d"} err="failed to get container status \"0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d\": rpc error: code = NotFound desc = could not find container \"0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d\": container with ID starting with 0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d not found: ID does not exist" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.598668 5010 scope.go:117] "RemoveContainer" containerID="68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8" Feb 03 10:26:00 crc kubenswrapper[5010]: E0203 10:26:00.611072 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8\": container with ID starting with 68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8 not found: ID does not exist" containerID="68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.611157 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8"} err="failed to get container status \"68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8\": rpc error: code = NotFound desc = could not find container \"68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8\": container with ID starting with 68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8 not found: ID does not exist" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.611735 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4338eb03-3ad6-4d68-8d8a-a37694aff6d7" (UID: "4338eb03-3ad6-4d68-8d8a-a37694aff6d7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.635928 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.635980 5010 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.635992 5010 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.636003 5010 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.636015 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rmrl\" (UniqueName: \"kubernetes.io/projected/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-kube-api-access-4rmrl\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.636977 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-config-data" (OuterVolumeSpecName: "config-data") pod "4338eb03-3ad6-4d68-8d8a-a37694aff6d7" (UID: "4338eb03-3ad6-4d68-8d8a-a37694aff6d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.672716 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4338eb03-3ad6-4d68-8d8a-a37694aff6d7" (UID: "4338eb03-3ad6-4d68-8d8a-a37694aff6d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.737404 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-scripts\") pod \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.737601 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvk2j\" (UniqueName: \"kubernetes.io/projected/872497ad-02bf-48fd-9ef7-c39591cd0cf3-kube-api-access-kvk2j\") pod \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.737679 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data-custom\") pod \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.737797 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/872497ad-02bf-48fd-9ef7-c39591cd0cf3-logs\") pod \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.737867 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/872497ad-02bf-48fd-9ef7-c39591cd0cf3-etc-machine-id\") pod \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.738071 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-combined-ca-bundle\") pod \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.738146 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data\") pod \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.738893 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/872497ad-02bf-48fd-9ef7-c39591cd0cf3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "872497ad-02bf-48fd-9ef7-c39591cd0cf3" (UID: "872497ad-02bf-48fd-9ef7-c39591cd0cf3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.739049 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/872497ad-02bf-48fd-9ef7-c39591cd0cf3-logs" (OuterVolumeSpecName: "logs") pod "872497ad-02bf-48fd-9ef7-c39591cd0cf3" (UID: "872497ad-02bf-48fd-9ef7-c39591cd0cf3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.739439 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.739460 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.739472 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/872497ad-02bf-48fd-9ef7-c39591cd0cf3-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.739485 5010 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/872497ad-02bf-48fd-9ef7-c39591cd0cf3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.758553 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "872497ad-02bf-48fd-9ef7-c39591cd0cf3" (UID: "872497ad-02bf-48fd-9ef7-c39591cd0cf3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.771431 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-scripts" (OuterVolumeSpecName: "scripts") pod "872497ad-02bf-48fd-9ef7-c39591cd0cf3" (UID: "872497ad-02bf-48fd-9ef7-c39591cd0cf3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.771537 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/872497ad-02bf-48fd-9ef7-c39591cd0cf3-kube-api-access-kvk2j" (OuterVolumeSpecName: "kube-api-access-kvk2j") pod "872497ad-02bf-48fd-9ef7-c39591cd0cf3" (UID: "872497ad-02bf-48fd-9ef7-c39591cd0cf3"). InnerVolumeSpecName "kube-api-access-kvk2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.773492 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7f744c8944-2zwzr"] Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.785722 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7f744c8944-2zwzr"] Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.802344 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "872497ad-02bf-48fd-9ef7-c39591cd0cf3" (UID: "872497ad-02bf-48fd-9ef7-c39591cd0cf3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.823380 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.834099 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data" (OuterVolumeSpecName: "config-data") pod "872497ad-02bf-48fd-9ef7-c39591cd0cf3" (UID: "872497ad-02bf-48fd-9ef7-c39591cd0cf3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.842221 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.842691 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.842892 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.843077 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvk2j\" (UniqueName: \"kubernetes.io/projected/872497ad-02bf-48fd-9ef7-c39591cd0cf3-kube-api-access-kvk2j\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.843140 5010 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.299574 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.299553 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4338eb03-3ad6-4d68-8d8a-a37694aff6d7","Type":"ContainerDied","Data":"61a59197d7bdf8ea63d4d37b8f71bb48f78f9037194046295bca9711dd2a3194"} Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.299773 5010 scope.go:117] "RemoveContainer" containerID="66c74d715b2eacb41bf0f0e39922576ad416b3eb1d6ad6955ec6036858cd2f1d" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.312073 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"872497ad-02bf-48fd-9ef7-c39591cd0cf3","Type":"ContainerDied","Data":"c4597e5fb6f0efc59bba027f6c62619a6af54fb50a6a0e89101889e721398156"} Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.312107 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.336383 5010 scope.go:117] "RemoveContainer" containerID="d91d141426317acd31c21e9040c1e38df0008cc513ccacd6d4ecf8718788f6f7" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.431307 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.468317 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.480671 5010 scope.go:117] "RemoveContainer" containerID="6c176208520e0e4aa9ea320d1edfe8ab83a7718fb33505386deba54305a99180" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.493610 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: E0203 10:26:01.494343 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api-log" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494373 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api-log" Feb 03 10:26:01 crc kubenswrapper[5010]: E0203 10:26:01.494395 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="sg-core" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494404 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="sg-core" Feb 03 10:26:01 crc kubenswrapper[5010]: E0203 10:26:01.494413 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494421 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api" Feb 03 10:26:01 crc kubenswrapper[5010]: E0203 10:26:01.494477 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="ceilometer-notification-agent" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494489 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="ceilometer-notification-agent" Feb 03 10:26:01 crc kubenswrapper[5010]: E0203 10:26:01.494505 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-log" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494514 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-log" Feb 03 10:26:01 crc kubenswrapper[5010]: E0203 10:26:01.494539 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-api" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494548 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-api" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494813 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="sg-core" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494840 5010 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api-log" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494859 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-log" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494876 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494895 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-api" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494907 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="ceilometer-notification-agent" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.499539 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.513837 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.514121 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.514349 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.547811 5010 scope.go:117] "RemoveContainer" containerID="8f0a78e854f7929105346a11ec8aadfb8c983687a2549ad3dc08c8797e25a961" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.554265 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.603668 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.623494 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.637381 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.656357 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.667665 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.668015 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.687754 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.687889 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-config-data-custom\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.687943 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e079d37-86a2-4be8-a16b-821095c780f0-logs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.688064 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gbbw\" (UniqueName: \"kubernetes.io/projected/7e079d37-86a2-4be8-a16b-821095c780f0-kube-api-access-7gbbw\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.688288 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-config-data\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.688410 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-scripts\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.688456 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.688879 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e079d37-86a2-4be8-a16b-821095c780f0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.690712 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.719306 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.795800 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-config-data\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.795924 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-scripts\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.795967 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796047 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-scripts\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796081 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-log-httpd\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796128 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796191 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e079d37-86a2-4be8-a16b-821095c780f0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796266 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796312 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796353 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vs4n\" (UniqueName: \"kubernetes.io/projected/4909daad-030c-436e-acf5-2405a74d8180-kube-api-access-4vs4n\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796391 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-config-data-custom\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796430 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e079d37-86a2-4be8-a16b-821095c780f0-logs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796483 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-config-data\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796517 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796606 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gbbw\" (UniqueName: \"kubernetes.io/projected/7e079d37-86a2-4be8-a16b-821095c780f0-kube-api-access-7gbbw\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796679 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-run-httpd\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.800695 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e079d37-86a2-4be8-a16b-821095c780f0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.802569 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e079d37-86a2-4be8-a16b-821095c780f0-logs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.805684 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-config-data-custom\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.806374 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-scripts\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.809315 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-config-data\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.812527 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.814607 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.823376 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gbbw\" (UniqueName: \"kubernetes.io/projected/7e079d37-86a2-4be8-a16b-821095c780f0-kube-api-access-7gbbw\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.826779 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.847436 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.900008 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vs4n\" (UniqueName: \"kubernetes.io/projected/4909daad-030c-436e-acf5-2405a74d8180-kube-api-access-4vs4n\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.900108 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-config-data\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.900148 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.900248 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-run-httpd\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.900476 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-scripts\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.900501 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-log-httpd\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.900547 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.902770 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-log-httpd\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.903128 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-run-httpd\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.906788 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " 
pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.909289 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-scripts\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.912346 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.912960 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-config-data\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.928752 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vs4n\" (UniqueName: \"kubernetes.io/projected/4909daad-030c-436e-acf5-2405a74d8180-kube-api-access-4vs4n\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:02 crc kubenswrapper[5010]: I0203 10:26:02.009813 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:02 crc kubenswrapper[5010]: I0203 10:26:02.448530 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:26:02 crc kubenswrapper[5010]: I0203 10:26:02.521777 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" path="/var/lib/kubelet/pods/4338eb03-3ad6-4d68-8d8a-a37694aff6d7/volumes" Feb 03 10:26:02 crc kubenswrapper[5010]: I0203 10:26:02.524520 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" path="/var/lib/kubelet/pods/872497ad-02bf-48fd-9ef7-c39591cd0cf3/volumes" Feb 03 10:26:02 crc kubenswrapper[5010]: I0203 10:26:02.526241 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" path="/var/lib/kubelet/pods/8d6356a1-c07c-4d04-8d48-7f13a822ddf5/volumes" Feb 03 10:26:02 crc kubenswrapper[5010]: I0203 10:26:02.667763 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:02 crc kubenswrapper[5010]: W0203 10:26:02.692096 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4909daad_030c_436e_acf5_2405a74d8180.slice/crio-9bf689dea05fc0f3ed74b115d13e839aab5eee31fcc1462d9040ce5ddfa67010 WatchSource:0}: Error finding container 9bf689dea05fc0f3ed74b115d13e839aab5eee31fcc1462d9040ce5ddfa67010: Status 404 returned error can't find the container with id 9bf689dea05fc0f3ed74b115d13e839aab5eee31fcc1462d9040ce5ddfa67010 Feb 03 10:26:03 crc kubenswrapper[5010]: I0203 10:26:03.416398 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e079d37-86a2-4be8-a16b-821095c780f0","Type":"ContainerStarted","Data":"244db2c4c114273555c75c4cb333f4b696198bb58fac76777ecd9f7aee8092e2"} Feb 03 10:26:03 crc kubenswrapper[5010]: I0203 10:26:03.417102 5010 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e079d37-86a2-4be8-a16b-821095c780f0","Type":"ContainerStarted","Data":"d326110758b57899bbb3402e1c571879c314d13619e61b251c6e77d898282b07"} Feb 03 10:26:03 crc kubenswrapper[5010]: I0203 10:26:03.418278 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerStarted","Data":"9bf689dea05fc0f3ed74b115d13e839aab5eee31fcc1462d9040ce5ddfa67010"} Feb 03 10:26:04 crc kubenswrapper[5010]: I0203 10:26:04.440735 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e079d37-86a2-4be8-a16b-821095c780f0","Type":"ContainerStarted","Data":"11c7a18a7c87397a4d54959b8f03343950c2f98b1dfd593b5d45bef5ac9adf81"} Feb 03 10:26:04 crc kubenswrapper[5010]: I0203 10:26:04.441451 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 03 10:26:04 crc kubenswrapper[5010]: I0203 10:26:04.455760 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerStarted","Data":"4198ce459a693b38bf47283f126a3f929ce83d42492541b2b961db5cda2701f4"} Feb 03 10:26:04 crc kubenswrapper[5010]: I0203 10:26:04.477176 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.477144578 podStartE2EDuration="3.477144578s" podCreationTimestamp="2026-02-03 10:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:04.465984611 +0000 UTC m=+1434.621960740" watchObservedRunningTime="2026-02-03 10:26:04.477144578 +0000 UTC m=+1434.633120707" Feb 03 10:26:05 crc kubenswrapper[5010]: I0203 10:26:05.526184 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerStarted","Data":"1bd8603024a229914190fc469345835e8b37de52fd7f1951f53bc0059a29de92"} Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.095465 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.139614 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.153975 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.230143 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-v4m78"] Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.230527 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" podUID="800c4356-da72-47c4-9a83-5eeceacc7211" containerName="dnsmasq-dns" containerID="cri-o://d1764054e077cd4256f8f822597e57237fec354ad2e79a0451fb06420764c4a9" gracePeriod=10 Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.576955 5010 generic.go:334] "Generic (PLEG): container finished" podID="800c4356-da72-47c4-9a83-5eeceacc7211" containerID="d1764054e077cd4256f8f822597e57237fec354ad2e79a0451fb06420764c4a9" exitCode=0 Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.577595 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-55f844cf75-v4m78" event={"ID":"800c4356-da72-47c4-9a83-5eeceacc7211","Type":"ContainerDied","Data":"d1764054e077cd4256f8f822597e57237fec354ad2e79a0451fb06420764c4a9"} Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.614642 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="cinder-scheduler" containerID="cri-o://02b1b0db1e1d1490264d407bf569bd8135ae614f331340a7de745dc600379321" gracePeriod=30 Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.614764 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="probe" containerID="cri-o://9afac37147605919491f382bbfc27637b26db8fa47e1eb9f1d9454af8578414f" gracePeriod=30 Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.614627 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerStarted","Data":"67d6ea389313e14d97c8b6c045808e3c44adad70ca29d47d5585704fabd03630"} Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.929029 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.978402 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-svc\") pod \"800c4356-da72-47c4-9a83-5eeceacc7211\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.978848 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-swift-storage-0\") pod \"800c4356-da72-47c4-9a83-5eeceacc7211\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.979035 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-sb\") pod \"800c4356-da72-47c4-9a83-5eeceacc7211\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.979371 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-nb\") pod \"800c4356-da72-47c4-9a83-5eeceacc7211\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.979505 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54blj\" (UniqueName: \"kubernetes.io/projected/800c4356-da72-47c4-9a83-5eeceacc7211-kube-api-access-54blj\") pod \"800c4356-da72-47c4-9a83-5eeceacc7211\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.979712 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-config\") pod \"800c4356-da72-47c4-9a83-5eeceacc7211\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 
10:26:07.113648 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800c4356-da72-47c4-9a83-5eeceacc7211-kube-api-access-54blj" (OuterVolumeSpecName: "kube-api-access-54blj") pod "800c4356-da72-47c4-9a83-5eeceacc7211" (UID: "800c4356-da72-47c4-9a83-5eeceacc7211"). InnerVolumeSpecName "kube-api-access-54blj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.174503 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "800c4356-da72-47c4-9a83-5eeceacc7211" (UID: "800c4356-da72-47c4-9a83-5eeceacc7211"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.192521 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "800c4356-da72-47c4-9a83-5eeceacc7211" (UID: "800c4356-da72-47c4-9a83-5eeceacc7211"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.209158 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-config" (OuterVolumeSpecName: "config") pod "800c4356-da72-47c4-9a83-5eeceacc7211" (UID: "800c4356-da72-47c4-9a83-5eeceacc7211"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.212422 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.212479 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54blj\" (UniqueName: \"kubernetes.io/projected/800c4356-da72-47c4-9a83-5eeceacc7211-kube-api-access-54blj\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.212500 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.212512 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.231028 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "800c4356-da72-47c4-9a83-5eeceacc7211" (UID: "800c4356-da72-47c4-9a83-5eeceacc7211"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.276017 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "800c4356-da72-47c4-9a83-5eeceacc7211" (UID: "800c4356-da72-47c4-9a83-5eeceacc7211"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.314701 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.314751 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.543939 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7594db59b7-8cg94"] Feb 03 10:26:07 crc kubenswrapper[5010]: E0203 10:26:07.546081 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800c4356-da72-47c4-9a83-5eeceacc7211" containerName="dnsmasq-dns" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.546122 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="800c4356-da72-47c4-9a83-5eeceacc7211" containerName="dnsmasq-dns" Feb 03 10:26:07 crc kubenswrapper[5010]: E0203 10:26:07.546197 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800c4356-da72-47c4-9a83-5eeceacc7211" containerName="init" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.546207 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="800c4356-da72-47c4-9a83-5eeceacc7211" containerName="init" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.546558 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="800c4356-da72-47c4-9a83-5eeceacc7211" containerName="dnsmasq-dns" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.554130 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.561609 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.561610 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.562199 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.586831 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7594db59b7-8cg94"] Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.735403 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" event={"ID":"800c4356-da72-47c4-9a83-5eeceacc7211","Type":"ContainerDied","Data":"a39cc9b17b280be33534b557e14c9c1d9f99cb76acef07ae259bc5d74339aa49"} Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.735514 5010 scope.go:117] "RemoveContainer" containerID="d1764054e077cd4256f8f822597e57237fec354ad2e79a0451fb06420764c4a9" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.736235 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.791182 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-v4m78"] Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.800188 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-config-data\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.800339 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0d01af0-abb7-4cd1-92d7-d741182948f9-etc-swift\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.800367 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0d01af0-abb7-4cd1-92d7-d741182948f9-run-httpd\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.800412 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhnnp\" (UniqueName: \"kubernetes.io/projected/a0d01af0-abb7-4cd1-92d7-d741182948f9-kube-api-access-qhnnp\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.801721 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-internal-tls-certs\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: 
\"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.803238 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-v4m78"] Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.803636 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-combined-ca-bundle\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.803833 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-public-tls-certs\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.804104 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0d01af0-abb7-4cd1-92d7-d741182948f9-log-httpd\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907005 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-internal-tls-certs\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907093 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-combined-ca-bundle\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907153 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-public-tls-certs\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907353 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0d01af0-abb7-4cd1-92d7-d741182948f9-log-httpd\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907432 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-config-data\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907462 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0d01af0-abb7-4cd1-92d7-d741182948f9-etc-swift\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907486 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0d01af0-abb7-4cd1-92d7-d741182948f9-run-httpd\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907518 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhnnp\" (UniqueName: \"kubernetes.io/projected/a0d01af0-abb7-4cd1-92d7-d741182948f9-kube-api-access-qhnnp\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.908780 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0d01af0-abb7-4cd1-92d7-d741182948f9-log-httpd\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.909015 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0d01af0-abb7-4cd1-92d7-d741182948f9-run-httpd\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.918629 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-public-tls-certs\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.918876 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-config-data\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.919837 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-combined-ca-bundle\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.920113 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-internal-tls-certs\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.934580 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0d01af0-abb7-4cd1-92d7-d741182948f9-etc-swift\") pod 
\"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.939059 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhnnp\" (UniqueName: \"kubernetes.io/projected/a0d01af0-abb7-4cd1-92d7-d741182948f9-kube-api-access-qhnnp\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:08 crc kubenswrapper[5010]: I0203 10:26:08.188684 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:08 crc kubenswrapper[5010]: I0203 10:26:08.520898 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="800c4356-da72-47c4-9a83-5eeceacc7211" path="/var/lib/kubelet/pods/800c4356-da72-47c4-9a83-5eeceacc7211/volumes" Feb 03 10:26:08 crc kubenswrapper[5010]: I0203 10:26:08.772014 5010 generic.go:334] "Generic (PLEG): container finished" podID="2608e076-ccd5-4d9b-9739-d2815655090e" containerID="9afac37147605919491f382bbfc27637b26db8fa47e1eb9f1d9454af8578414f" exitCode=0 Feb 03 10:26:08 crc kubenswrapper[5010]: I0203 10:26:08.772107 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2608e076-ccd5-4d9b-9739-d2815655090e","Type":"ContainerDied","Data":"9afac37147605919491f382bbfc27637b26db8fa47e1eb9f1d9454af8578414f"} Feb 03 10:26:09 crc kubenswrapper[5010]: I0203 10:26:09.354908 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:26:09 crc kubenswrapper[5010]: I0203 10:26:09.420836 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:26:09 crc kubenswrapper[5010]: I0203 10:26:09.661610 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zcvn8"] Feb 03 10:26:09 crc kubenswrapper[5010]: I0203 10:26:09.818347 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:26:09 crc kubenswrapper[5010]: I0203 10:26:09.819273 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-log" containerID="cri-o://55bbb2cde20dfdcd53e2ce462c09a9714ec6a75aaad1416462255a0ed6efb0a8" gracePeriod=30 Feb 03 10:26:09 crc kubenswrapper[5010]: I0203 10:26:09.819500 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-httpd" containerID="cri-o://9b0678012ddc709164e9aead0d03359efde01194b4a43605e01e402b58fd05e9" gracePeriod=30 Feb 03 10:26:10 crc kubenswrapper[5010]: I0203 10:26:10.018280 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:10 crc kubenswrapper[5010]: I0203 10:26:10.824909 5010 generic.go:334] "Generic (PLEG): container finished" podID="2608e076-ccd5-4d9b-9739-d2815655090e" containerID="02b1b0db1e1d1490264d407bf569bd8135ae614f331340a7de745dc600379321" exitCode=0 Feb 03 10:26:10 crc kubenswrapper[5010]: I0203 10:26:10.825031 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"2608e076-ccd5-4d9b-9739-d2815655090e","Type":"ContainerDied","Data":"02b1b0db1e1d1490264d407bf569bd8135ae614f331340a7de745dc600379321"} Feb 03 10:26:10 crc kubenswrapper[5010]: I0203 10:26:10.830134 5010 generic.go:334] "Generic (PLEG): container finished" podID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerID="55bbb2cde20dfdcd53e2ce462c09a9714ec6a75aaad1416462255a0ed6efb0a8" exitCode=143 Feb 03 10:26:10 crc kubenswrapper[5010]: I0203 10:26:10.830490 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zcvn8" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="registry-server" containerID="cri-o://8340acedc9cfb7958b5ed0fad5a8c1555a0dabbb9f7998f97b867b7a3dd1d05e" gracePeriod=2 Feb 03 10:26:10 crc kubenswrapper[5010]: I0203 10:26:10.830912 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ef87127-760d-4f81-8a78-a06d074c7ec3","Type":"ContainerDied","Data":"55bbb2cde20dfdcd53e2ce462c09a9714ec6a75aaad1416462255a0ed6efb0a8"} Feb 03 10:26:11 crc kubenswrapper[5010]: I0203 10:26:11.682452 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:26:11 crc kubenswrapper[5010]: I0203 10:26:11.683417 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-log" containerID="cri-o://d96c848085855a1aab0bb15f4dcb25d155e8b02a76c2309a7e985e9edc63c08c" gracePeriod=30 Feb 03 10:26:11 crc kubenswrapper[5010]: I0203 10:26:11.683652 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-httpd" containerID="cri-o://25ca14ceea3124e9ce28f484389b454fe015ddd37e62df01b7fb16db5f838f83" gracePeriod=30 Feb 03 10:26:11 crc kubenswrapper[5010]: I0203 10:26:11.853069 5010 generic.go:334] "Generic (PLEG): container finished" podID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerID="d96c848085855a1aab0bb15f4dcb25d155e8b02a76c2309a7e985e9edc63c08c" exitCode=143 Feb 03 10:26:11 crc kubenswrapper[5010]: I0203 10:26:11.853150 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d327288-f34e-4766-b3f6-b52b5c985d7d","Type":"ContainerDied","Data":"d96c848085855a1aab0bb15f4dcb25d155e8b02a76c2309a7e985e9edc63c08c"} Feb 03 10:26:11 crc kubenswrapper[5010]: I0203 10:26:11.859418 5010 generic.go:334] "Generic (PLEG): container finished" podID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerID="8340acedc9cfb7958b5ed0fad5a8c1555a0dabbb9f7998f97b867b7a3dd1d05e" exitCode=0 Feb 03 10:26:11 crc kubenswrapper[5010]: I0203 10:26:11.859650 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcvn8" event={"ID":"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb","Type":"ContainerDied","Data":"8340acedc9cfb7958b5ed0fad5a8c1555a0dabbb9f7998f97b867b7a3dd1d05e"} Feb 03 10:26:12 crc kubenswrapper[5010]: I0203 10:26:12.437146 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:26:12 crc kubenswrapper[5010]: I0203 10:26:12.566304 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-867995856-hbnv9"] Feb 03 10:26:12 crc kubenswrapper[5010]: I0203 10:26:12.566729 5010 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-867995856-hbnv9" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-api" containerID="cri-o://13a99ef6826ee2239f9e033be19a6f4c730512b38fb4cc1caa87b9ad6b5789db" gracePeriod=30 Feb 03 10:26:12 crc kubenswrapper[5010]: I0203 10:26:12.567679 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-867995856-hbnv9" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-httpd" containerID="cri-o://61b9f09360bad3b65b22af3bd28bc767427a951a1f75a5674af55a31458394a9" gracePeriod=30 Feb 03 10:26:12 crc kubenswrapper[5010]: I0203 10:26:12.908306 5010 generic.go:334] "Generic (PLEG): container finished" podID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerID="61b9f09360bad3b65b22af3bd28bc767427a951a1f75a5674af55a31458394a9" exitCode=0 Feb 03 10:26:12 crc kubenswrapper[5010]: I0203 10:26:12.908384 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867995856-hbnv9" event={"ID":"ec3f26b1-ee88-47b4-80d5-f281aa85c00d","Type":"ContainerDied","Data":"61b9f09360bad3b65b22af3bd28bc767427a951a1f75a5674af55a31458394a9"} Feb 03 10:26:13 crc kubenswrapper[5010]: I0203 10:26:13.946609 5010 generic.go:334] "Generic (PLEG): container finished" podID="2fedcc57-b16c-4177-a10e-f627269b4adb" containerID="45c56002ab101b0e77fc5934aa412e9d50c3e636af770ec4fe10888a673e7f7e" exitCode=137 Feb 03 10:26:13 crc kubenswrapper[5010]: I0203 10:26:13.947372 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988db4-2mpfb" event={"ID":"2fedcc57-b16c-4177-a10e-f627269b4adb","Type":"ContainerDied","Data":"45c56002ab101b0e77fc5934aa412e9d50c3e636af770ec4fe10888a673e7f7e"} Feb 03 10:26:13 crc kubenswrapper[5010]: I0203 10:26:13.952191 5010 generic.go:334] "Generic (PLEG): container finished" podID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerID="9b0678012ddc709164e9aead0d03359efde01194b4a43605e01e402b58fd05e9" exitCode=0 Feb 03 10:26:13 crc kubenswrapper[5010]: I0203 10:26:13.952343 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ef87127-760d-4f81-8a78-a06d074c7ec3","Type":"ContainerDied","Data":"9b0678012ddc709164e9aead0d03359efde01194b4a43605e01e402b58fd05e9"} Feb 03 10:26:13 crc kubenswrapper[5010]: I0203 10:26:13.964450 5010 generic.go:334] "Generic (PLEG): container finished" podID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerID="2cc2ce22d6ea86e28f6eb264d0d9c9e725b7685d6ab0fd02531064a6b9b028b0" exitCode=137 Feb 03 10:26:13 crc kubenswrapper[5010]: I0203 10:26:13.964554 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerDied","Data":"2cc2ce22d6ea86e28f6eb264d0d9c9e725b7685d6ab0fd02531064a6b9b028b0"} Feb 03 10:26:15 crc kubenswrapper[5010]: I0203 10:26:15.386091 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 03 10:26:15 crc kubenswrapper[5010]: I0203 10:26:15.990529 5010 generic.go:334] "Generic (PLEG): container finished" podID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerID="25ca14ceea3124e9ce28f484389b454fe015ddd37e62df01b7fb16db5f838f83" exitCode=0 Feb 03 10:26:15 crc kubenswrapper[5010]: I0203 10:26:15.990619 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"8d327288-f34e-4766-b3f6-b52b5c985d7d","Type":"ContainerDied","Data":"25ca14ceea3124e9ce28f484389b454fe015ddd37e62df01b7fb16db5f838f83"} Feb 03 10:26:17 crc kubenswrapper[5010]: E0203 10:26:17.223501 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Feb 03 10:26:17 crc kubenswrapper[5010]: E0203 10:26:17.224480 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n548h65dh564h668h596h87hffh65dh559h5chbch654h5fdh64dhffh94h75hbbh79h67bh5c5h8chf4h7ch5c9h5c9h5ch588h88hb9hch648q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nq64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(c80632c0-72bc-461d-8e87-591d0ddbc1a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:26:17 crc kubenswrapper[5010]: E0203 10:26:17.225789 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="c80632c0-72bc-461d-8e87-591d0ddbc1a8" Feb 03 10:26:17 crc kubenswrapper[5010]: I0203 10:26:17.433896 5010 scope.go:117] "RemoveContainer" containerID="e300605267e4f1076a4841165415138776a8cf13a2c4a8aef99e228176fdb314" Feb 03 10:26:17 crc kubenswrapper[5010]: I0203 
10:26:17.698011 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:26:17 crc kubenswrapper[5010]: I0203 10:26:17.897155 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xt6g\" (UniqueName: \"kubernetes.io/projected/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-kube-api-access-7xt6g\") pod \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " Feb 03 10:26:17 crc kubenswrapper[5010]: I0203 10:26:17.897311 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-utilities\") pod \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " Feb 03 10:26:17 crc kubenswrapper[5010]: I0203 10:26:17.897397 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-catalog-content\") pod \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " Feb 03 10:26:17 crc kubenswrapper[5010]: I0203 10:26:17.900896 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-utilities" (OuterVolumeSpecName: "utilities") pod "a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" (UID: "a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:17 crc kubenswrapper[5010]: I0203 10:26:17.911740 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-kube-api-access-7xt6g" (OuterVolumeSpecName: "kube-api-access-7xt6g") pod "a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" (UID: "a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb"). InnerVolumeSpecName "kube-api-access-7xt6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.004582 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xt6g\" (UniqueName: \"kubernetes.io/projected/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-kube-api-access-7xt6g\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.004623 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.029921 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" (UID: "a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.070583 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": dial tcp 10.217.0.151:9292: connect: connection refused" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.072562 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": dial tcp 10.217.0.151:9292: connect: connection refused" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.074642 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcvn8" event={"ID":"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb","Type":"ContainerDied","Data":"e35e681b91c0a3ba4c5e23b8c2426b406cc51121c6807c30d998f313924cb39e"} Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.074727 5010 scope.go:117] "RemoveContainer" containerID="8340acedc9cfb7958b5ed0fad5a8c1555a0dabbb9f7998f97b867b7a3dd1d05e" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.074933 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.108353 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: E0203 10:26:18.133068 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="c80632c0-72bc-461d-8e87-591d0ddbc1a8" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.208744 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zcvn8"] Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.228978 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zcvn8"] Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.245476 5010 scope.go:117] "RemoveContainer" containerID="74673c9131b0207ab10afaa2abb5a53e1aa2d49409325c6d66e87e77d3e886a6" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.512412 5010 scope.go:117] "RemoveContainer" containerID="fe0ab3a7555528e34ba8c05e18f87523a24b1e0ac976b994fc2479b4a244d8aa" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.519893 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" path="/var/lib/kubelet/pods/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb/volumes" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.529981 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.583842 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645380 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-scripts\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645502 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v84sf\" (UniqueName: \"kubernetes.io/projected/3ef87127-760d-4f81-8a78-a06d074c7ec3-kube-api-access-v84sf\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645620 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data-custom\") pod \"2608e076-ccd5-4d9b-9739-d2815655090e\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645711 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-logs\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645822 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-combined-ca-bundle\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645875 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-config-data\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645896 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-combined-ca-bundle\") pod \"2608e076-ccd5-4d9b-9739-d2815655090e\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645957 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-httpd-run\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.646019 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrcvl\" (UniqueName: \"kubernetes.io/projected/2608e076-ccd5-4d9b-9739-d2815655090e-kube-api-access-jrcvl\") pod \"2608e076-ccd5-4d9b-9739-d2815655090e\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.646260 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-scripts\") pod \"2608e076-ccd5-4d9b-9739-d2815655090e\" (UID: 
\"2608e076-ccd5-4d9b-9739-d2815655090e\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.646323 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data\") pod \"2608e076-ccd5-4d9b-9739-d2815655090e\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.646377 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2608e076-ccd5-4d9b-9739-d2815655090e-etc-machine-id\") pod \"2608e076-ccd5-4d9b-9739-d2815655090e\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.646399 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.646463 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-public-tls-certs\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.652176 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.652794 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-logs" (OuterVolumeSpecName: "logs") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.653794 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2608e076-ccd5-4d9b-9739-d2815655090e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2608e076-ccd5-4d9b-9739-d2815655090e" (UID: "2608e076-ccd5-4d9b-9739-d2815655090e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.690790 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-scripts" (OuterVolumeSpecName: "scripts") pod "2608e076-ccd5-4d9b-9739-d2815655090e" (UID: "2608e076-ccd5-4d9b-9739-d2815655090e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.693044 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.697853 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2608e076-ccd5-4d9b-9739-d2815655090e" (UID: "2608e076-ccd5-4d9b-9739-d2815655090e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.700376 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-scripts" (OuterVolumeSpecName: "scripts") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.727749 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2608e076-ccd5-4d9b-9739-d2815655090e-kube-api-access-jrcvl" (OuterVolumeSpecName: "kube-api-access-jrcvl") pod "2608e076-ccd5-4d9b-9739-d2815655090e" (UID: "2608e076-ccd5-4d9b-9739-d2815655090e"). InnerVolumeSpecName "kube-api-access-jrcvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.735566 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef87127-760d-4f81-8a78-a06d074c7ec3-kube-api-access-v84sf" (OuterVolumeSpecName: "kube-api-access-v84sf") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "kube-api-access-v84sf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751499 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751563 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v84sf\" (UniqueName: \"kubernetes.io/projected/3ef87127-760d-4f81-8a78-a06d074c7ec3-kube-api-access-v84sf\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751577 5010 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751587 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751595 5010 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751610 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrcvl\" (UniqueName: \"kubernetes.io/projected/2608e076-ccd5-4d9b-9739-d2815655090e-kube-api-access-jrcvl\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751620 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751630 5010 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2608e076-ccd5-4d9b-9739-d2815655090e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751666 5010 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.829206 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.860175 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.976643 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2608e076-ccd5-4d9b-9739-d2815655090e" (UID: "2608e076-ccd5-4d9b-9739-d2815655090e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.980913 5010 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.982901 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.982931 5010 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.027118 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.047615 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.084537 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-httpd-run\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.084713 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-combined-ca-bundle\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.084832 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-scripts\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.084937 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ddcb\" (UniqueName: \"kubernetes.io/projected/8d327288-f34e-4766-b3f6-b52b5c985d7d-kube-api-access-8ddcb\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.084991 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.085012 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.085250 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-internal-tls-certs\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") "
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.085765 5010 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.102971 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.114334 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-logs" (OuterVolumeSpecName: "logs") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.130122 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-config-data" (OuterVolumeSpecName: "config-data") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.131247 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data" (OuterVolumeSpecName: "config-data") pod "2608e076-ccd5-4d9b-9739-d2815655090e" (UID: "2608e076-ccd5-4d9b-9739-d2815655090e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.136672 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d327288-f34e-4766-b3f6-b52b5c985d7d-kube-api-access-8ddcb" (OuterVolumeSpecName: "kube-api-access-8ddcb") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "kube-api-access-8ddcb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.186713 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.196608 5010 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.196651 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.196666 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ddcb\" (UniqueName: \"kubernetes.io/projected/8d327288-f34e-4766-b3f6-b52b5c985d7d-kube-api-access-8ddcb\") on node \"crc\" DevicePath \"\""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.196708 5010 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.196721 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-logs\") on node \"crc\" DevicePath \"\""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.196734 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.209448 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-scripts" (OuterVolumeSpecName: "scripts") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.221597 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.251785 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ef87127-760d-4f81-8a78-a06d074c7ec3","Type":"ContainerDied","Data":"6bd4ac18ae915fc96ca9ce387172eccabbebfdb18cd09371727e5b54df8c7288"}
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.251885 5010 scope.go:117] "RemoveContainer" containerID="9b0678012ddc709164e9aead0d03359efde01194b4a43605e01e402b58fd05e9"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.252051 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.289657 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d327288-f34e-4766-b3f6-b52b5c985d7d","Type":"ContainerDied","Data":"1764b6a93e3f3ed5e01b4b46981d2b3555284f7ada6ea1b560610775c21c68d5"}
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.289798 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.313057 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.313100 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.345446 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7594db59b7-8cg94"]
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.351051 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988db4-2mpfb" event={"ID":"2fedcc57-b16c-4177-a10e-f627269b4adb","Type":"ContainerStarted","Data":"6fbb0922a53d8d49edbd5cf6902f7fd678c5bafcb14a6637ba51e4911560e746"}
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.361354 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2608e076-ccd5-4d9b-9739-d2815655090e","Type":"ContainerDied","Data":"8fc43be7c4e38eab87c6ce057e45c890d78c06e59c1c3f94eb288aeb3ef2742e"}
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.361533 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.424201 5010 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.428257 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerStarted","Data":"204ff7b5906df6362a9178ddb04b60b73173622cbd63d2c7b2264912f116e282"} Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.429029 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-central-agent" containerID="cri-o://4198ce459a693b38bf47283f126a3f929ce83d42492541b2b961db5cda2701f4" gracePeriod=30 Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.429543 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.429761 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="proxy-httpd" containerID="cri-o://204ff7b5906df6362a9178ddb04b60b73173622cbd63d2c7b2264912f116e282" gracePeriod=30 Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.429919 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="sg-core" containerID="cri-o://67d6ea389313e14d97c8b6c045808e3c44adad70ca29d47d5585704fabd03630" gracePeriod=30 Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.430062 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-notification-agent" containerID="cri-o://1bd8603024a229914190fc469345835e8b37de52fd7f1951f53bc0059a29de92" gracePeriod=30 Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.475475 5010 generic.go:334] "Generic (PLEG): container finished" podID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerID="13a99ef6826ee2239f9e033be19a6f4c730512b38fb4cc1caa87b9ad6b5789db" exitCode=0 Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.475566 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867995856-hbnv9" event={"ID":"ec3f26b1-ee88-47b4-80d5-f281aa85c00d","Type":"ContainerDied","Data":"13a99ef6826ee2239f9e033be19a6f4c730512b38fb4cc1caa87b9ad6b5789db"} Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.481392 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerStarted","Data":"4e9bc8f0d6381cd12e012dcf3fe06eb0672b376af0b818c286309997a48dc607"} Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.516992 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.289353566 podStartE2EDuration="18.516964547s" podCreationTimestamp="2026-02-03 10:26:01 +0000 UTC" firstStartedPulling="2026-02-03 10:26:02.694767891 +0000 UTC m=+1432.850744020" lastFinishedPulling="2026-02-03 10:26:17.922378872 +0000 UTC m=+1448.078355001" observedRunningTime="2026-02-03 10:26:19.512816581 +0000 UTC m=+1449.668792730" 
watchObservedRunningTime="2026-02-03 10:26:19.516964547 +0000 UTC m=+1449.672940676" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.535260 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data" (OuterVolumeSpecName: "config-data") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.540502 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.541547 5010 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: W0203 10:26:19.545188 5010 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8d327288-f34e-4766-b3f6-b52b5c985d7d/volumes/kubernetes.io~secret/config-data Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.545241 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data" (OuterVolumeSpecName: "config-data") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.554629 5010 scope.go:117] "RemoveContainer" containerID="55bbb2cde20dfdcd53e2ce462c09a9714ec6a75aaad1416462255a0ed6efb0a8" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.585833 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.602409 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-867995856-hbnv9" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.664967 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.665046 5010 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.708809 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.714664 5010 scope.go:117] "RemoveContainer" containerID="25ca14ceea3124e9ce28f484389b454fe015ddd37e62df01b7fb16db5f838f83" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.766457 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-httpd-config\") pod \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.766527 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-combined-ca-bundle\") pod \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.766594 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkvkc\" (UniqueName: \"kubernetes.io/projected/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-kube-api-access-mkvkc\") pod \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.766884 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-config\") pod \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.766930 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-ovndb-tls-certs\") pod \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.770372 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.782696 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.807797 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.825464 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.825927 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-kube-api-access-mkvkc" 
(OuterVolumeSpecName: "kube-api-access-mkvkc") pod "ec3f26b1-ee88-47b4-80d5-f281aa85c00d" (UID: "ec3f26b1-ee88-47b4-80d5-f281aa85c00d"). InnerVolumeSpecName "kube-api-access-mkvkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.828325 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="extract-content" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829549 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="extract-content" Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829586 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-api" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829596 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-api" Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829619 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-httpd" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829631 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-httpd" Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829641 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="cinder-scheduler" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829648 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="cinder-scheduler" Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829663 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-log" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829669 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-log" Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829681 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-log" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829687 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-log" Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829697 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="probe" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829704 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="probe" Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829737 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="registry-server" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829745 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="registry-server" Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829755 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" 
containerName="extract-utilities" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829763 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="extract-utilities" Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829777 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-httpd" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829785 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-httpd" Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829803 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-httpd" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829810 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-httpd" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830157 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-log" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830183 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="cinder-scheduler" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830197 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-httpd" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830237 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-log" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830257 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="probe" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830269 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-api" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830301 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-httpd" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830317 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-httpd" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830332 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="registry-server" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.831401 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ec3f26b1-ee88-47b4-80d5-f281aa85c00d" (UID: "ec3f26b1-ee88-47b4-80d5-f281aa85c00d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.831787 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.849065 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.854519 5010 scope.go:117] "RemoveContainer" containerID="d96c848085855a1aab0bb15f4dcb25d155e8b02a76c2309a7e985e9edc63c08c" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.870421 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.881183 5010 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.881258 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkvkc\" (UniqueName: \"kubernetes.io/projected/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-kube-api-access-mkvkc\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.949573 5010 scope.go:117] "RemoveContainer" containerID="9afac37147605919491f382bbfc27637b26db8fa47e1eb9f1d9454af8578414f" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.963163 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.968801 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.972045 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mtbjz" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.972397 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.974954 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.974981 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.979092 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-config" (OuterVolumeSpecName: "config") pod "ec3f26b1-ee88-47b4-80d5-f281aa85c00d" (UID: "ec3f26b1-ee88-47b4-80d5-f281aa85c00d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.983102 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dxll\" (UniqueName: \"kubernetes.io/projected/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-kube-api-access-9dxll\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.983205 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-config-data\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.983303 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.983342 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.983381 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-scripts\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.983400 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.983870 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.984029 5010 scope.go:117] "RemoveContainer" containerID="02b1b0db1e1d1490264d407bf569bd8135ae614f331340a7de745dc600379321" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.007270 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec3f26b1-ee88-47b4-80d5-f281aa85c00d" (UID: "ec3f26b1-ee88-47b4-80d5-f281aa85c00d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.035569 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.056645 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.078136 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.104165 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.107448 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-scripts\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.107580 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.107736 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769cccf-496c-4370-8e08-e1f156fecd77-logs\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.107817 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-scripts\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.107861 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.107895 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.107933 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " 
pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.108122 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.108290 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-config-data\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.108744 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.109062 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dxll\" (UniqueName: \"kubernetes.io/projected/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-kube-api-access-9dxll\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.109273 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db6fd\" (UniqueName: \"kubernetes.io/projected/1769cccf-496c-4370-8e08-e1f156fecd77-kube-api-access-db6fd\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.109517 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-config-data\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.109653 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769cccf-496c-4370-8e08-e1f156fecd77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.110891 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.113915 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.125184 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") 
" pod="openstack/cinder-scheduler-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.125406 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-config-data\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.125832 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-scripts\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.126749 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.128192 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.133924 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.135392 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.143073 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.144805 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dxll\" (UniqueName: \"kubernetes.io/projected/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-kube-api-access-9dxll\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.168180 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ec3f26b1-ee88-47b4-80d5-f281aa85c00d" (UID: "ec3f26b1-ee88-47b4-80d5-f281aa85c00d"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.213231 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.213627 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.213737 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.213881 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769cccf-496c-4370-8e08-e1f156fecd77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.213984 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.214086 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.214235 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-logs\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.214409 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-scripts\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.216176 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769cccf-496c-4370-8e08-e1f156fecd77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " 
pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.216380 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8d6z\" (UniqueName: \"kubernetes.io/projected/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-kube-api-access-m8d6z\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.216615 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.216693 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769cccf-496c-4370-8e08-e1f156fecd77-logs\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.216854 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.216918 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.217082 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.217203 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-config-data\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.217431 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db6fd\" (UniqueName: \"kubernetes.io/projected/1769cccf-496c-4370-8e08-e1f156fecd77-kube-api-access-db6fd\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.218399 5010 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.223129 5010 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-config-data\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.223247 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.223917 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769cccf-496c-4370-8e08-e1f156fecd77-logs\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.228809 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-scripts\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.229280 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.233045 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.241664 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.242632 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db6fd\" (UniqueName: \"kubernetes.io/projected/1769cccf-496c-4370-8e08-e1f156fecd77-kube-api-access-db6fd\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.282350 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.320383 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.320716 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.321000 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.321371 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.321474 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.321534 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.321638 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-logs\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 
10:26:20.321787 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8d6z\" (UniqueName: \"kubernetes.io/projected/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-kube-api-access-m8d6z\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.321878 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.323081 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-logs\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.324986 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.327211 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.329471 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.348014 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.354879 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.360107 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8d6z\" (UniqueName: \"kubernetes.io/projected/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-kube-api-access-m8d6z\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.360945 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " 
pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.417896 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.467081 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.595063 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" path="/var/lib/kubelet/pods/2608e076-ccd5-4d9b-9739-d2815655090e/volumes" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.596173 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" path="/var/lib/kubelet/pods/3ef87127-760d-4f81-8a78-a06d074c7ec3/volumes" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.602305 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" path="/var/lib/kubelet/pods/8d327288-f34e-4766-b3f6-b52b5c985d7d/volumes" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.619252 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7594db59b7-8cg94" event={"ID":"a0d01af0-abb7-4cd1-92d7-d741182948f9","Type":"ContainerStarted","Data":"d650c86cc6764932add9e9703768e8c9d50ba847abea4a0b062a2d92d6a9e49d"} Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.619370 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7594db59b7-8cg94" event={"ID":"a0d01af0-abb7-4cd1-92d7-d741182948f9","Type":"ContainerStarted","Data":"3bb214043f133be975e271904ac4313246c72c1065478f5fc497fe7508412cbf"} Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.690640 5010 generic.go:334] "Generic (PLEG): container finished" podID="4909daad-030c-436e-acf5-2405a74d8180" containerID="67d6ea389313e14d97c8b6c045808e3c44adad70ca29d47d5585704fabd03630" exitCode=2 Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.691190 5010 generic.go:334] "Generic (PLEG): container finished" podID="4909daad-030c-436e-acf5-2405a74d8180" containerID="4198ce459a693b38bf47283f126a3f929ce83d42492541b2b961db5cda2701f4" exitCode=0 Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.691323 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerDied","Data":"67d6ea389313e14d97c8b6c045808e3c44adad70ca29d47d5585704fabd03630"} Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.691367 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerDied","Data":"4198ce459a693b38bf47283f126a3f929ce83d42492541b2b961db5cda2701f4"} Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.703749 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867995856-hbnv9" event={"ID":"ec3f26b1-ee88-47b4-80d5-f281aa85c00d","Type":"ContainerDied","Data":"5d57a17f6b627eededa0a21aa0ef2051ab13fadb63e9a5ef111d5cb1f8d96193"} Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.703851 5010 scope.go:117] "RemoveContainer" 
containerID="61b9f09360bad3b65b22af3bd28bc767427a951a1f75a5674af55a31458394a9" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.704118 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-867995856-hbnv9" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.774659 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-867995856-hbnv9"] Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.776817 5010 scope.go:117] "RemoveContainer" containerID="13a99ef6826ee2239f9e033be19a6f4c730512b38fb4cc1caa87b9ad6b5789db" Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.810753 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-867995856-hbnv9"] Feb 03 10:26:21 crc kubenswrapper[5010]: I0203 10:26:21.012592 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:26:21 crc kubenswrapper[5010]: I0203 10:26:21.472438 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:26:21 crc kubenswrapper[5010]: W0203 10:26:21.485605 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9df7182f_e3e9_40bf_bfb2_b2e9ef64f90a.slice/crio-ee33cff183c92ef1b70e5e208d817c51e9ad6b2607a4d19849d3d342e041a4cc WatchSource:0}: Error finding container ee33cff183c92ef1b70e5e208d817c51e9ad6b2607a4d19849d3d342e041a4cc: Status 404 returned error can't find the container with id ee33cff183c92ef1b70e5e208d817c51e9ad6b2607a4d19849d3d342e041a4cc Feb 03 10:26:21 crc kubenswrapper[5010]: I0203 10:26:21.720257 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a","Type":"ContainerStarted","Data":"ee33cff183c92ef1b70e5e208d817c51e9ad6b2607a4d19849d3d342e041a4cc"} Feb 03 10:26:21 crc kubenswrapper[5010]: I0203 10:26:21.724280 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"63ed8c2d-6ac3-4a61-8e4c-1601efeca708","Type":"ContainerStarted","Data":"5094e83e6ce9d9199193fbd6c30a37df43729ffbc7fdc7fa8d97620d280876e4"} Feb 03 10:26:21 crc kubenswrapper[5010]: I0203 10:26:21.738978 5010 generic.go:334] "Generic (PLEG): container finished" podID="4909daad-030c-436e-acf5-2405a74d8180" containerID="1bd8603024a229914190fc469345835e8b37de52fd7f1951f53bc0059a29de92" exitCode=0 Feb 03 10:26:21 crc kubenswrapper[5010]: I0203 10:26:21.739048 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerDied","Data":"1bd8603024a229914190fc469345835e8b37de52fd7f1951f53bc0059a29de92"} Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.219705 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qnsrk"] Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.236041 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qnsrk" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.284740 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qnsrk"] Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.416867 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d58b-account-create-update-p69h5"] Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.419565 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d58b-account-create-update-p69h5" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.432082 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-operator-scripts\") pod \"nova-api-db-create-qnsrk\" (UID: \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " pod="openstack/nova-api-db-create-qnsrk" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.432634 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4nh7\" (UniqueName: \"kubernetes.io/projected/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-kube-api-access-j4nh7\") pod \"nova-api-db-create-qnsrk\" (UID: \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " pod="openstack/nova-api-db-create-qnsrk" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.432826 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.600852 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8h8k\" (UniqueName: \"kubernetes.io/projected/122231ac-5000-44d7-a524-2df85da0abd4-kube-api-access-r8h8k\") pod \"nova-api-d58b-account-create-update-p69h5\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " pod="openstack/nova-api-d58b-account-create-update-p69h5" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.600960 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4nh7\" (UniqueName: \"kubernetes.io/projected/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-kube-api-access-j4nh7\") pod \"nova-api-db-create-qnsrk\" (UID: \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " pod="openstack/nova-api-db-create-qnsrk" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.601174 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-operator-scripts\") pod \"nova-api-db-create-qnsrk\" (UID: \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " pod="openstack/nova-api-db-create-qnsrk" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.605147 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/122231ac-5000-44d7-a524-2df85da0abd4-operator-scripts\") pod \"nova-api-d58b-account-create-update-p69h5\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " pod="openstack/nova-api-d58b-account-create-update-p69h5" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.610054 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-operator-scripts\") pod \"nova-api-db-create-qnsrk\" (UID: 
\"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " pod="openstack/nova-api-db-create-qnsrk" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.675278 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" path="/var/lib/kubelet/pods/ec3f26b1-ee88-47b4-80d5-f281aa85c00d/volumes" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.676870 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d58b-account-create-update-p69h5"] Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.676907 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.676924 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-dq6kw"] Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.681736 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dq6kw" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.709519 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dq6kw"] Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.713315 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4nh7\" (UniqueName: \"kubernetes.io/projected/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-kube-api-access-j4nh7\") pod \"nova-api-db-create-qnsrk\" (UID: \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " pod="openstack/nova-api-db-create-qnsrk" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.746199 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/122231ac-5000-44d7-a524-2df85da0abd4-operator-scripts\") pod \"nova-api-d58b-account-create-update-p69h5\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " pod="openstack/nova-api-d58b-account-create-update-p69h5" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.746645 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8h8k\" (UniqueName: \"kubernetes.io/projected/122231ac-5000-44d7-a524-2df85da0abd4-kube-api-access-r8h8k\") pod \"nova-api-d58b-account-create-update-p69h5\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " pod="openstack/nova-api-d58b-account-create-update-p69h5" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.750120 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/122231ac-5000-44d7-a524-2df85da0abd4-operator-scripts\") pod \"nova-api-d58b-account-create-update-p69h5\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " pod="openstack/nova-api-d58b-account-create-update-p69h5" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.777293 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8h8k\" (UniqueName: \"kubernetes.io/projected/122231ac-5000-44d7-a524-2df85da0abd4-kube-api-access-r8h8k\") pod \"nova-api-d58b-account-create-update-p69h5\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " pod="openstack/nova-api-d58b-account-create-update-p69h5" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.786680 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fztcs"] Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.792147 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-fztcs" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.797114 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fztcs"] Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.810416 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.811833 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.825015 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-46aa-account-create-update-5gs9h"] Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.837891 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.845738 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.855012 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-operator-scripts\") pod \"nova-cell0-db-create-dq6kw\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " pod="openstack/nova-cell0-db-create-dq6kw" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.860180 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7s8r\" (UniqueName: \"kubernetes.io/projected/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-kube-api-access-z7s8r\") pod \"nova-cell0-db-create-dq6kw\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " pod="openstack/nova-cell0-db-create-dq6kw" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.872709 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"63ed8c2d-6ac3-4a61-8e4c-1601efeca708","Type":"ContainerStarted","Data":"608458075b6b49913240654df17472092c0c9c4149bbc8fea5e0d935492ce955"} Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.878455 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-46aa-account-create-update-5gs9h"] Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.886645 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1769cccf-496c-4370-8e08-e1f156fecd77","Type":"ContainerStarted","Data":"5fcc8ee5ae1d4704603c864a8158576315906c583ebe1fb70d9c31068cab8a7d"} Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.887419 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qnsrk" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.921062 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7594db59b7-8cg94" event={"ID":"a0d01af0-abb7-4cd1-92d7-d741182948f9","Type":"ContainerStarted","Data":"df73bc00fb7fe066fcb6d82f9ec7d7342ce26208e27603844392cda655acb073"} Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.922124 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.922191 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.926910 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c6bf-account-create-update-9xrwr"] Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.930197 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.933801 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.938977 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c6bf-account-create-update-9xrwr"] Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.964449 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fac5d19-4577-4190-b626-83d0b42fd46d-operator-scripts\") pod \"nova-cell0-46aa-account-create-update-5gs9h\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.964604 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-operator-scripts\") pod \"nova-cell0-db-create-dq6kw\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " pod="openstack/nova-cell0-db-create-dq6kw" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.964628 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19aa5f54-6733-454e-a1cf-92ba62fc4068-operator-scripts\") pod \"nova-cell1-db-create-fztcs\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " pod="openstack/nova-cell1-db-create-fztcs" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.964683 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dcr7\" (UniqueName: \"kubernetes.io/projected/19aa5f54-6733-454e-a1cf-92ba62fc4068-kube-api-access-6dcr7\") pod \"nova-cell1-db-create-fztcs\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " pod="openstack/nova-cell1-db-create-fztcs" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.964797 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7s8r\" (UniqueName: \"kubernetes.io/projected/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-kube-api-access-z7s8r\") pod \"nova-cell0-db-create-dq6kw\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " pod="openstack/nova-cell0-db-create-dq6kw" Feb 03 10:26:22 crc kubenswrapper[5010]: 
I0203 10:26:22.964850 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khj9c\" (UniqueName: \"kubernetes.io/projected/6fac5d19-4577-4190-b626-83d0b42fd46d-kube-api-access-khj9c\") pod \"nova-cell0-46aa-account-create-update-5gs9h\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.968614 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-operator-scripts\") pod \"nova-cell0-db-create-dq6kw\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " pod="openstack/nova-cell0-db-create-dq6kw" Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.979873 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7594db59b7-8cg94" podStartSLOduration=15.979838824 podStartE2EDuration="15.979838824s" podCreationTimestamp="2026-02-03 10:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:22.952144573 +0000 UTC m=+1453.108120722" watchObservedRunningTime="2026-02-03 10:26:22.979838824 +0000 UTC m=+1453.135814953" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.016825 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7s8r\" (UniqueName: \"kubernetes.io/projected/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-kube-api-access-z7s8r\") pod \"nova-cell0-db-create-dq6kw\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " pod="openstack/nova-cell0-db-create-dq6kw" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.079229 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d58b-account-create-update-p69h5" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.080162 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cab88b93-9009-49d9-8967-dc8f2b9a7244-operator-scripts\") pod \"nova-cell1-c6bf-account-create-update-9xrwr\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.080536 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khj9c\" (UniqueName: \"kubernetes.io/projected/6fac5d19-4577-4190-b626-83d0b42fd46d-kube-api-access-khj9c\") pod \"nova-cell0-46aa-account-create-update-5gs9h\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.080618 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chd7x\" (UniqueName: \"kubernetes.io/projected/cab88b93-9009-49d9-8967-dc8f2b9a7244-kube-api-access-chd7x\") pod \"nova-cell1-c6bf-account-create-update-9xrwr\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.080692 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fac5d19-4577-4190-b626-83d0b42fd46d-operator-scripts\") pod \"nova-cell0-46aa-account-create-update-5gs9h\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.080866 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19aa5f54-6733-454e-a1cf-92ba62fc4068-operator-scripts\") pod \"nova-cell1-db-create-fztcs\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " pod="openstack/nova-cell1-db-create-fztcs" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.080937 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dcr7\" (UniqueName: \"kubernetes.io/projected/19aa5f54-6733-454e-a1cf-92ba62fc4068-kube-api-access-6dcr7\") pod \"nova-cell1-db-create-fztcs\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " pod="openstack/nova-cell1-db-create-fztcs" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.083429 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19aa5f54-6733-454e-a1cf-92ba62fc4068-operator-scripts\") pod \"nova-cell1-db-create-fztcs\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " pod="openstack/nova-cell1-db-create-fztcs" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.084430 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fac5d19-4577-4190-b626-83d0b42fd46d-operator-scripts\") pod \"nova-cell0-46aa-account-create-update-5gs9h\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.111193 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6dcr7\" (UniqueName: \"kubernetes.io/projected/19aa5f54-6733-454e-a1cf-92ba62fc4068-kube-api-access-6dcr7\") pod \"nova-cell1-db-create-fztcs\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " pod="openstack/nova-cell1-db-create-fztcs" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.114158 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khj9c\" (UniqueName: \"kubernetes.io/projected/6fac5d19-4577-4190-b626-83d0b42fd46d-kube-api-access-khj9c\") pod \"nova-cell0-46aa-account-create-update-5gs9h\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.124546 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.125403 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.196300 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cab88b93-9009-49d9-8967-dc8f2b9a7244-operator-scripts\") pod \"nova-cell1-c6bf-account-create-update-9xrwr\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.196906 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chd7x\" (UniqueName: \"kubernetes.io/projected/cab88b93-9009-49d9-8967-dc8f2b9a7244-kube-api-access-chd7x\") pod \"nova-cell1-c6bf-account-create-update-9xrwr\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.203229 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cab88b93-9009-49d9-8967-dc8f2b9a7244-operator-scripts\") pod \"nova-cell1-c6bf-account-create-update-9xrwr\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.230516 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dq6kw" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.243453 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chd7x\" (UniqueName: \"kubernetes.io/projected/cab88b93-9009-49d9-8967-dc8f2b9a7244-kube-api-access-chd7x\") pod \"nova-cell1-c6bf-account-create-update-9xrwr\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.251139 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fztcs" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.391075 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.422867 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.800998 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qnsrk"] Feb 03 10:26:23 crc kubenswrapper[5010]: W0203 10:26:23.847784 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26fff59b_fc6c_46b2_9cb6_9ad352b4e39c.slice/crio-0ba4d23b4eba6d6e0c64a591720369c91d163b40cc0f86e50be5facff204aee1 WatchSource:0}: Error finding container 0ba4d23b4eba6d6e0c64a591720369c91d163b40cc0f86e50be5facff204aee1: Status 404 returned error can't find the container with id 0ba4d23b4eba6d6e0c64a591720369c91d163b40cc0f86e50be5facff204aee1 Feb 03 10:26:24 crc kubenswrapper[5010]: I0203 10:26:24.000675 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qnsrk" event={"ID":"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c","Type":"ContainerStarted","Data":"0ba4d23b4eba6d6e0c64a591720369c91d163b40cc0f86e50be5facff204aee1"} Feb 03 10:26:24 crc kubenswrapper[5010]: I0203 10:26:24.022655 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a","Type":"ContainerStarted","Data":"bb831f8968e76e6ea1b5107a74598ccfd811b313307026199e8086e291b6b925"} Feb 03 10:26:24 crc kubenswrapper[5010]: I0203 10:26:24.090527 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7594db59b7-8cg94" podUID="a0d01af0-abb7-4cd1-92d7-d741182948f9" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 10:26:24 crc kubenswrapper[5010]: I0203 10:26:24.487575 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d58b-account-create-update-p69h5"] Feb 03 10:26:24 crc kubenswrapper[5010]: I0203 10:26:24.696809 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dq6kw"] Feb 03 10:26:24 crc kubenswrapper[5010]: W0203 10:26:24.745282 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod307672c5_ae66_4af2_bbbb_1a59c58ee4b2.slice/crio-eee96a285511460543183b4fa28b6245bf21bbbd910269f1be813ccaf8a85b09 WatchSource:0}: Error finding container eee96a285511460543183b4fa28b6245bf21bbbd910269f1be813ccaf8a85b09: Status 404 returned error can't find the container with id eee96a285511460543183b4fa28b6245bf21bbbd910269f1be813ccaf8a85b09 Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.086874 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d58b-account-create-update-p69h5" event={"ID":"122231ac-5000-44d7-a524-2df85da0abd4","Type":"ContainerStarted","Data":"d7c37356251ceab45787255a0bf11e8d0fb8d799a100064f75c795e909a8b233"} Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.102419 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"63ed8c2d-6ac3-4a61-8e4c-1601efeca708","Type":"ContainerStarted","Data":"ebf13c00c4aecf2f4b7ce83a689427d37c49b125515ab68b9a2ecbbc3500216e"} Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.140691 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qnsrk" event={"ID":"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c","Type":"ContainerStarted","Data":"a966998f1e0d5c656c412830d78b6e892d7c7c270d9300eb5f417be99b11fe63"} Feb 03 
10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.159603 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dq6kw" event={"ID":"307672c5-ae66-4af2-bbbb-1a59c58ee4b2","Type":"ContainerStarted","Data":"eee96a285511460543183b4fa28b6245bf21bbbd910269f1be813ccaf8a85b09"} Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.171691 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.171640793 podStartE2EDuration="6.171640793s" podCreationTimestamp="2026-02-03 10:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:25.14272235 +0000 UTC m=+1455.298698479" watchObservedRunningTime="2026-02-03 10:26:25.171640793 +0000 UTC m=+1455.327616932" Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.217884 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-qnsrk" podStartSLOduration=3.217852809 podStartE2EDuration="3.217852809s" podCreationTimestamp="2026-02-03 10:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:25.177080232 +0000 UTC m=+1455.333056361" watchObservedRunningTime="2026-02-03 10:26:25.217852809 +0000 UTC m=+1455.373828938" Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.243441 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.292175 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fztcs"] Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.305652 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-46aa-account-create-update-5gs9h"] Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.381976 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c6bf-account-create-update-9xrwr"] Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.208260 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" event={"ID":"6fac5d19-4577-4190-b626-83d0b42fd46d","Type":"ContainerStarted","Data":"48902a83c43af8a62b4d6b968a8b3ca68e0101eb2b41fc6cd1fdf99dd7be0466"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.209241 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" event={"ID":"6fac5d19-4577-4190-b626-83d0b42fd46d","Type":"ContainerStarted","Data":"e6e83b9fa88b18c5bf71a71896def34f7759be48f196cbee117b1b6d7fc1256f"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.241060 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d58b-account-create-update-p69h5" event={"ID":"122231ac-5000-44d7-a524-2df85da0abd4","Type":"ContainerStarted","Data":"481559434a2d42e2a028cba399231b55666506a6320e8ddbe78f4de71650ba33"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.244884 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" podStartSLOduration=4.24484745 podStartE2EDuration="4.24484745s" podCreationTimestamp="2026-02-03 10:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-03 10:26:26.241017162 +0000 UTC m=+1456.396993291" watchObservedRunningTime="2026-02-03 10:26:26.24484745 +0000 UTC m=+1456.400823579" Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.279381 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fztcs" event={"ID":"19aa5f54-6733-454e-a1cf-92ba62fc4068","Type":"ContainerStarted","Data":"277036577a9bb8f26bb26efd4d33210a114ebacd0ae43e4abbbdfbe425f61dd5"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.279454 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fztcs" event={"ID":"19aa5f54-6733-454e-a1cf-92ba62fc4068","Type":"ContainerStarted","Data":"49558af84c27fd529f7f93b79b04100ed86805e41a3a8207cb74e5891388348f"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.296058 5010 generic.go:334] "Generic (PLEG): container finished" podID="26fff59b-fc6c-46b2-9cb6-9ad352b4e39c" containerID="a966998f1e0d5c656c412830d78b6e892d7c7c270d9300eb5f417be99b11fe63" exitCode=0 Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.296160 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qnsrk" event={"ID":"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c","Type":"ContainerDied","Data":"a966998f1e0d5c656c412830d78b6e892d7c7c270d9300eb5f417be99b11fe63"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.314743 5010 generic.go:334] "Generic (PLEG): container finished" podID="307672c5-ae66-4af2-bbbb-1a59c58ee4b2" containerID="4927cc4be235478029139ce32f036f214b152852871af562859aac3f62d37796" exitCode=0 Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.314887 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dq6kw" event={"ID":"307672c5-ae66-4af2-bbbb-1a59c58ee4b2","Type":"ContainerDied","Data":"4927cc4be235478029139ce32f036f214b152852871af562859aac3f62d37796"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.322817 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" event={"ID":"cab88b93-9009-49d9-8967-dc8f2b9a7244","Type":"ContainerStarted","Data":"279c8b5f461c06f3191fbc6bb211d5d862c782efbbff978992257a86dd9152d3"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.322892 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" event={"ID":"cab88b93-9009-49d9-8967-dc8f2b9a7244","Type":"ContainerStarted","Data":"0a2358b435e4d2a2f42ee4e3e8fcbdc8cf21cbb007e9e788e0e3ad868a511b80"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.327311 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1769cccf-496c-4370-8e08-e1f156fecd77","Type":"ContainerStarted","Data":"2d8db287b9e462878af4470363facdc91935bbf327f4082fd0e4728ee3cb2035"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.340758 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-fztcs" podStartSLOduration=4.340706651 podStartE2EDuration="4.340706651s" podCreationTimestamp="2026-02-03 10:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:26.310476375 +0000 UTC m=+1456.466452494" watchObservedRunningTime="2026-02-03 10:26:26.340706651 +0000 UTC m=+1456.496682780" Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.342246 5010 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a","Type":"ContainerStarted","Data":"b8cb775a7d77ea587bacf09d59466092c4bfc800e46073c399dc94d5fa42b79e"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.413727 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" podStartSLOduration=4.413693455 podStartE2EDuration="4.413693455s" podCreationTimestamp="2026-02-03 10:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:26.365919599 +0000 UTC m=+1456.521895748" watchObservedRunningTime="2026-02-03 10:26:26.413693455 +0000 UTC m=+1456.569669584" Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.537126 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.537084694 podStartE2EDuration="7.537084694s" podCreationTimestamp="2026-02-03 10:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:26.448480899 +0000 UTC m=+1456.604457038" watchObservedRunningTime="2026-02-03 10:26:26.537084694 +0000 UTC m=+1456.693060833" Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.354371 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1769cccf-496c-4370-8e08-e1f156fecd77","Type":"ContainerStarted","Data":"246c1e7b3ea1f8cbcc196edcc1361a663ce9cf422d064143d09c6bc10719e9b2"} Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.358847 5010 generic.go:334] "Generic (PLEG): container finished" podID="6fac5d19-4577-4190-b626-83d0b42fd46d" containerID="48902a83c43af8a62b4d6b968a8b3ca68e0101eb2b41fc6cd1fdf99dd7be0466" exitCode=0 Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.358970 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" event={"ID":"6fac5d19-4577-4190-b626-83d0b42fd46d","Type":"ContainerDied","Data":"48902a83c43af8a62b4d6b968a8b3ca68e0101eb2b41fc6cd1fdf99dd7be0466"} Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.364900 5010 generic.go:334] "Generic (PLEG): container finished" podID="122231ac-5000-44d7-a524-2df85da0abd4" containerID="481559434a2d42e2a028cba399231b55666506a6320e8ddbe78f4de71650ba33" exitCode=0 Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.365012 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d58b-account-create-update-p69h5" event={"ID":"122231ac-5000-44d7-a524-2df85da0abd4","Type":"ContainerDied","Data":"481559434a2d42e2a028cba399231b55666506a6320e8ddbe78f4de71650ba33"} Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.367243 5010 generic.go:334] "Generic (PLEG): container finished" podID="19aa5f54-6733-454e-a1cf-92ba62fc4068" containerID="277036577a9bb8f26bb26efd4d33210a114ebacd0ae43e4abbbdfbe425f61dd5" exitCode=0 Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.367335 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fztcs" event={"ID":"19aa5f54-6733-454e-a1cf-92ba62fc4068","Type":"ContainerDied","Data":"277036577a9bb8f26bb26efd4d33210a114ebacd0ae43e4abbbdfbe425f61dd5"} Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.369878 5010 generic.go:334] "Generic (PLEG): container finished" 
podID="cab88b93-9009-49d9-8967-dc8f2b9a7244" containerID="279c8b5f461c06f3191fbc6bb211d5d862c782efbbff978992257a86dd9152d3" exitCode=0 Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.369988 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" event={"ID":"cab88b93-9009-49d9-8967-dc8f2b9a7244","Type":"ContainerDied","Data":"279c8b5f461c06f3191fbc6bb211d5d862c782efbbff978992257a86dd9152d3"} Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.392472 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.392436666 podStartE2EDuration="8.392436666s" podCreationTimestamp="2026-02-03 10:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:27.389752347 +0000 UTC m=+1457.545728496" watchObservedRunningTime="2026-02-03 10:26:27.392436666 +0000 UTC m=+1457.548412815" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.044701 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d58b-account-create-update-p69h5" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.128439 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/122231ac-5000-44d7-a524-2df85da0abd4-operator-scripts\") pod \"122231ac-5000-44d7-a524-2df85da0abd4\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.128644 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8h8k\" (UniqueName: \"kubernetes.io/projected/122231ac-5000-44d7-a524-2df85da0abd4-kube-api-access-r8h8k\") pod \"122231ac-5000-44d7-a524-2df85da0abd4\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.130975 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/122231ac-5000-44d7-a524-2df85da0abd4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "122231ac-5000-44d7-a524-2df85da0abd4" (UID: "122231ac-5000-44d7-a524-2df85da0abd4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.138799 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/122231ac-5000-44d7-a524-2df85da0abd4-kube-api-access-r8h8k" (OuterVolumeSpecName: "kube-api-access-r8h8k") pod "122231ac-5000-44d7-a524-2df85da0abd4" (UID: "122231ac-5000-44d7-a524-2df85da0abd4"). InnerVolumeSpecName "kube-api-access-r8h8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.202603 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.227977 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.235190 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/122231ac-5000-44d7-a524-2df85da0abd4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.235250 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8h8k\" (UniqueName: \"kubernetes.io/projected/122231ac-5000-44d7-a524-2df85da0abd4-kube-api-access-r8h8k\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.262162 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dq6kw" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.279764 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qnsrk" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.337493 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4nh7\" (UniqueName: \"kubernetes.io/projected/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-kube-api-access-j4nh7\") pod \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\" (UID: \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.337653 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-operator-scripts\") pod \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.337695 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7s8r\" (UniqueName: \"kubernetes.io/projected/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-kube-api-access-z7s8r\") pod \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.337730 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-operator-scripts\") pod \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\" (UID: \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.342460 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "307672c5-ae66-4af2-bbbb-1a59c58ee4b2" (UID: "307672c5-ae66-4af2-bbbb-1a59c58ee4b2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.349335 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-kube-api-access-j4nh7" (OuterVolumeSpecName: "kube-api-access-j4nh7") pod "26fff59b-fc6c-46b2-9cb6-9ad352b4e39c" (UID: "26fff59b-fc6c-46b2-9cb6-9ad352b4e39c"). InnerVolumeSpecName "kube-api-access-j4nh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.350648 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26fff59b-fc6c-46b2-9cb6-9ad352b4e39c" (UID: "26fff59b-fc6c-46b2-9cb6-9ad352b4e39c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.352637 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-kube-api-access-z7s8r" (OuterVolumeSpecName: "kube-api-access-z7s8r") pod "307672c5-ae66-4af2-bbbb-1a59c58ee4b2" (UID: "307672c5-ae66-4af2-bbbb-1a59c58ee4b2"). InnerVolumeSpecName "kube-api-access-z7s8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.393272 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d58b-account-create-update-p69h5" event={"ID":"122231ac-5000-44d7-a524-2df85da0abd4","Type":"ContainerDied","Data":"d7c37356251ceab45787255a0bf11e8d0fb8d799a100064f75c795e909a8b233"} Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.393348 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c37356251ceab45787255a0bf11e8d0fb8d799a100064f75c795e909a8b233" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.393442 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d58b-account-create-update-p69h5" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.409285 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qnsrk" event={"ID":"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c","Type":"ContainerDied","Data":"0ba4d23b4eba6d6e0c64a591720369c91d163b40cc0f86e50be5facff204aee1"} Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.409899 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ba4d23b4eba6d6e0c64a591720369c91d163b40cc0f86e50be5facff204aee1" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.409324 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qnsrk" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.431034 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dq6kw" event={"ID":"307672c5-ae66-4af2-bbbb-1a59c58ee4b2","Type":"ContainerDied","Data":"eee96a285511460543183b4fa28b6245bf21bbbd910269f1be813ccaf8a85b09"} Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.431506 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dq6kw" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.433916 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee96a285511460543183b4fa28b6245bf21bbbd910269f1be813ccaf8a85b09" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.441166 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4nh7\" (UniqueName: \"kubernetes.io/projected/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-kube-api-access-j4nh7\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.441328 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.441346 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7s8r\" (UniqueName: \"kubernetes.io/projected/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-kube-api-access-z7s8r\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.441357 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.850953 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.897468 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chd7x\" (UniqueName: \"kubernetes.io/projected/cab88b93-9009-49d9-8967-dc8f2b9a7244-kube-api-access-chd7x\") pod \"cab88b93-9009-49d9-8967-dc8f2b9a7244\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.900207 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cab88b93-9009-49d9-8967-dc8f2b9a7244-operator-scripts\") pod \"cab88b93-9009-49d9-8967-dc8f2b9a7244\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.903644 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cab88b93-9009-49d9-8967-dc8f2b9a7244-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cab88b93-9009-49d9-8967-dc8f2b9a7244" (UID: "cab88b93-9009-49d9-8967-dc8f2b9a7244"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.920860 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cab88b93-9009-49d9-8967-dc8f2b9a7244-kube-api-access-chd7x" (OuterVolumeSpecName: "kube-api-access-chd7x") pod "cab88b93-9009-49d9-8967-dc8f2b9a7244" (UID: "cab88b93-9009-49d9-8967-dc8f2b9a7244"). InnerVolumeSpecName "kube-api-access-chd7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.005312 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chd7x\" (UniqueName: \"kubernetes.io/projected/cab88b93-9009-49d9-8967-dc8f2b9a7244-kube-api-access-chd7x\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.005380 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cab88b93-9009-49d9-8967-dc8f2b9a7244-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.177065 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.186341 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fztcs" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.313357 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19aa5f54-6733-454e-a1cf-92ba62fc4068-operator-scripts\") pod \"19aa5f54-6733-454e-a1cf-92ba62fc4068\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.313591 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khj9c\" (UniqueName: \"kubernetes.io/projected/6fac5d19-4577-4190-b626-83d0b42fd46d-kube-api-access-khj9c\") pod \"6fac5d19-4577-4190-b626-83d0b42fd46d\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.313804 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dcr7\" (UniqueName: \"kubernetes.io/projected/19aa5f54-6733-454e-a1cf-92ba62fc4068-kube-api-access-6dcr7\") pod \"19aa5f54-6733-454e-a1cf-92ba62fc4068\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.313931 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fac5d19-4577-4190-b626-83d0b42fd46d-operator-scripts\") pod \"6fac5d19-4577-4190-b626-83d0b42fd46d\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.314196 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19aa5f54-6733-454e-a1cf-92ba62fc4068-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19aa5f54-6733-454e-a1cf-92ba62fc4068" (UID: "19aa5f54-6733-454e-a1cf-92ba62fc4068"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.314634 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19aa5f54-6733-454e-a1cf-92ba62fc4068-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.315098 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fac5d19-4577-4190-b626-83d0b42fd46d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fac5d19-4577-4190-b626-83d0b42fd46d" (UID: "6fac5d19-4577-4190-b626-83d0b42fd46d"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.322880 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19aa5f54-6733-454e-a1cf-92ba62fc4068-kube-api-access-6dcr7" (OuterVolumeSpecName: "kube-api-access-6dcr7") pod "19aa5f54-6733-454e-a1cf-92ba62fc4068" (UID: "19aa5f54-6733-454e-a1cf-92ba62fc4068"). InnerVolumeSpecName "kube-api-access-6dcr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.329519 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fac5d19-4577-4190-b626-83d0b42fd46d-kube-api-access-khj9c" (OuterVolumeSpecName: "kube-api-access-khj9c") pod "6fac5d19-4577-4190-b626-83d0b42fd46d" (UID: "6fac5d19-4577-4190-b626-83d0b42fd46d"). InnerVolumeSpecName "kube-api-access-khj9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.417043 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dcr7\" (UniqueName: \"kubernetes.io/projected/19aa5f54-6733-454e-a1cf-92ba62fc4068-kube-api-access-6dcr7\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.417114 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fac5d19-4577-4190-b626-83d0b42fd46d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.417125 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khj9c\" (UniqueName: \"kubernetes.io/projected/6fac5d19-4577-4190-b626-83d0b42fd46d-kube-api-access-khj9c\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.452345 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fztcs" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.452354 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fztcs" event={"ID":"19aa5f54-6733-454e-a1cf-92ba62fc4068","Type":"ContainerDied","Data":"49558af84c27fd529f7f93b79b04100ed86805e41a3a8207cb74e5891388348f"} Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.452443 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49558af84c27fd529f7f93b79b04100ed86805e41a3a8207cb74e5891388348f" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.457942 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" event={"ID":"cab88b93-9009-49d9-8967-dc8f2b9a7244","Type":"ContainerDied","Data":"0a2358b435e4d2a2f42ee4e3e8fcbdc8cf21cbb007e9e788e0e3ad868a511b80"} Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.458443 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a2358b435e4d2a2f42ee4e3e8fcbdc8cf21cbb007e9e788e0e3ad868a511b80" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.458057 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.463906 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" event={"ID":"6fac5d19-4577-4190-b626-83d0b42fd46d","Type":"ContainerDied","Data":"e6e83b9fa88b18c5bf71a71896def34f7759be48f196cbee117b1b6d7fc1256f"} Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.463969 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6e83b9fa88b18c5bf71a71896def34f7759be48f196cbee117b1b6d7fc1256f" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.464045 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.327891 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.328567 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.399034 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.404911 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.467463 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.467554 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.496773 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c80632c0-72bc-461d-8e87-591d0ddbc1a8","Type":"ContainerStarted","Data":"17f0b34ebc4ff0a6df652cf57cfa5f25ce04e81690b49ed17ee73385232e443a"} Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.498718 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.498755 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.540296 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.540898 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.549191 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.373804076 podStartE2EDuration="33.549160261s" podCreationTimestamp="2026-02-03 10:25:57 +0000 UTC" firstStartedPulling="2026-02-03 10:25:58.851419434 +0000 UTC m=+1429.007395563" lastFinishedPulling="2026-02-03 10:26:30.026775619 +0000 UTC m=+1460.182751748" observedRunningTime="2026-02-03 10:26:30.523546554 +0000 UTC m=+1460.679522683" watchObservedRunningTime="2026-02-03 10:26:30.549160261 
+0000 UTC m=+1460.705136401" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.616797 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 03 10:26:31 crc kubenswrapper[5010]: I0203 10:26:31.508590 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:31 crc kubenswrapper[5010]: I0203 10:26:31.509839 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.022868 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.805572 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.962063 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gd6dz"] Feb 03 10:26:32 crc kubenswrapper[5010]: E0203 10:26:32.962713 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26fff59b-fc6c-46b2-9cb6-9ad352b4e39c" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.962730 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fff59b-fc6c-46b2-9cb6-9ad352b4e39c" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: E0203 10:26:32.962743 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fac5d19-4577-4190-b626-83d0b42fd46d" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.962750 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fac5d19-4577-4190-b626-83d0b42fd46d" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: E0203 10:26:32.962761 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="307672c5-ae66-4af2-bbbb-1a59c58ee4b2" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.962768 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="307672c5-ae66-4af2-bbbb-1a59c58ee4b2" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: E0203 10:26:32.962793 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab88b93-9009-49d9-8967-dc8f2b9a7244" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.962802 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab88b93-9009-49d9-8967-dc8f2b9a7244" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: E0203 10:26:32.962828 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="122231ac-5000-44d7-a524-2df85da0abd4" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.962835 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="122231ac-5000-44d7-a524-2df85da0abd4" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: E0203 
10:26:32.962863 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19aa5f54-6733-454e-a1cf-92ba62fc4068" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.962873 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="19aa5f54-6733-454e-a1cf-92ba62fc4068" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.963096 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="19aa5f54-6733-454e-a1cf-92ba62fc4068" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.963110 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="307672c5-ae66-4af2-bbbb-1a59c58ee4b2" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.963137 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="122231ac-5000-44d7-a524-2df85da0abd4" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.963148 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fac5d19-4577-4190-b626-83d0b42fd46d" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.963165 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab88b93-9009-49d9-8967-dc8f2b9a7244" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.963181 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="26fff59b-fc6c-46b2-9cb6-9ad352b4e39c" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.964115 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.967185 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kdpzn" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.967517 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.990776 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.049716 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gd6dz"] Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.126103 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988db4-2mpfb" podUID="2fedcc57-b16c-4177-a10e-f627269b4adb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.132095 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t4sz\" (UniqueName: \"kubernetes.io/projected/49ca9130-4a3c-4c64-8557-5c5e29df551d-kube-api-access-7t4sz\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.132434 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-scripts\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.132504 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-config-data\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.132531 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.235050 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-config-data\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.235118 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.235155 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t4sz\" (UniqueName: \"kubernetes.io/projected/49ca9130-4a3c-4c64-8557-5c5e29df551d-kube-api-access-7t4sz\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.235385 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-scripts\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.251529 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-scripts\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.260169 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.260495 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-config-data\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.270331 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t4sz\" (UniqueName: \"kubernetes.io/projected/49ca9130-4a3c-4c64-8557-5c5e29df551d-kube-api-access-7t4sz\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.295399 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.541180 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.542117 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:26:34 crc kubenswrapper[5010]: I0203 10:26:34.159543 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gd6dz"] Feb 03 10:26:34 crc kubenswrapper[5010]: I0203 10:26:34.574479 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" event={"ID":"49ca9130-4a3c-4c64-8557-5c5e29df551d","Type":"ContainerStarted","Data":"0adb2c17444ab86300890aee767fdf0a4d7295fac27461d1c7107972deeb4e36"} Feb 03 10:26:34 crc kubenswrapper[5010]: I0203 10:26:34.666332 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 10:26:35 crc kubenswrapper[5010]: I0203 10:26:35.901378 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:35 crc kubenswrapper[5010]: I0203 10:26:35.902064 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:26:36 crc kubenswrapper[5010]: I0203 10:26:36.085649 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:37 crc kubenswrapper[5010]: I0203 10:26:37.697451 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 10:26:42 crc kubenswrapper[5010]: I0203 10:26:42.807235 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Feb 03 10:26:43 crc kubenswrapper[5010]: I0203 10:26:43.134442 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988db4-2mpfb" podUID="2fedcc57-b16c-4177-a10e-f627269b4adb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 03 10:26:48 crc kubenswrapper[5010]: I0203 10:26:48.506143 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-log" probeResult="failure" output="Get 
\"https://10.217.0.152:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 10:26:48 crc kubenswrapper[5010]: I0203 10:26:48.506166 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 10:26:48 crc kubenswrapper[5010]: I0203 10:26:48.893580 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" event={"ID":"49ca9130-4a3c-4c64-8557-5c5e29df551d","Type":"ContainerStarted","Data":"529624536a7c99d14d746a21069148e69bbb624ecc0d005496493ce4e1241033"} Feb 03 10:26:49 crc kubenswrapper[5010]: I0203 10:26:49.916456 5010 generic.go:334] "Generic (PLEG): container finished" podID="4909daad-030c-436e-acf5-2405a74d8180" containerID="204ff7b5906df6362a9178ddb04b60b73173622cbd63d2c7b2264912f116e282" exitCode=137 Feb 03 10:26:49 crc kubenswrapper[5010]: I0203 10:26:49.916549 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerDied","Data":"204ff7b5906df6362a9178ddb04b60b73173622cbd63d2c7b2264912f116e282"} Feb 03 10:26:49 crc kubenswrapper[5010]: I0203 10:26:49.917048 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerDied","Data":"9bf689dea05fc0f3ed74b115d13e839aab5eee31fcc1462d9040ce5ddfa67010"} Feb 03 10:26:49 crc kubenswrapper[5010]: I0203 10:26:49.917065 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bf689dea05fc0f3ed74b115d13e839aab5eee31fcc1462d9040ce5ddfa67010" Feb 03 10:26:49 crc kubenswrapper[5010]: I0203 10:26:49.957812 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:49 crc kubenswrapper[5010]: I0203 10:26:49.995047 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" podStartSLOduration=4.413292846 podStartE2EDuration="17.995008676s" podCreationTimestamp="2026-02-03 10:26:32 +0000 UTC" firstStartedPulling="2026-02-03 10:26:34.11373719 +0000 UTC m=+1464.269713319" lastFinishedPulling="2026-02-03 10:26:47.69545301 +0000 UTC m=+1477.851429149" observedRunningTime="2026-02-03 10:26:48.923872432 +0000 UTC m=+1479.079848581" watchObservedRunningTime="2026-02-03 10:26:49.995008676 +0000 UTC m=+1480.150984805" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.130922 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-run-httpd\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.131058 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.131207 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vs4n\" (UniqueName: \"kubernetes.io/projected/4909daad-030c-436e-acf5-2405a74d8180-kube-api-access-4vs4n\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.131416 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-config-data\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.131491 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-log-httpd\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.131698 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-sg-core-conf-yaml\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.131754 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-scripts\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.132099 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.132729 5010 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.133701 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.140984 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4909daad-030c-436e-acf5-2405a74d8180-kube-api-access-4vs4n" (OuterVolumeSpecName: "kube-api-access-4vs4n") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "kube-api-access-4vs4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.141139 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-scripts" (OuterVolumeSpecName: "scripts") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.174138 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.236401 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.237432 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.238856 5010 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.238902 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.238923 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vs4n\" (UniqueName: \"kubernetes.io/projected/4909daad-030c-436e-acf5-2405a74d8180-kube-api-access-4vs4n\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.238941 5010 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:50 crc kubenswrapper[5010]: W0203 10:26:50.239088 5010 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4909daad-030c-436e-acf5-2405a74d8180/volumes/kubernetes.io~secret/combined-ca-bundle Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.239156 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.283641 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-config-data" (OuterVolumeSpecName: "config-data") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.343527 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.343583 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.930012 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.967569 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.978017 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.008945 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:51 crc kubenswrapper[5010]: E0203 10:26:51.009757 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="sg-core" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.009792 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="sg-core" Feb 03 10:26:51 crc kubenswrapper[5010]: E0203 10:26:51.009817 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-central-agent" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.009828 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-central-agent" Feb 03 10:26:51 crc kubenswrapper[5010]: E0203 10:26:51.009843 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="proxy-httpd" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.009853 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="proxy-httpd" Feb 03 10:26:51 crc kubenswrapper[5010]: E0203 10:26:51.009865 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-notification-agent" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.009874 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-notification-agent" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.010154 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-central-agent" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.010189 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="proxy-httpd" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.010202 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-notification-agent" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.010237 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="sg-core" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.013062 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.017661 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.018019 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.053474 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.164622 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vldqz\" (UniqueName: \"kubernetes.io/projected/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-kube-api-access-vldqz\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.164691 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-run-httpd\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.164742 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.164760 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.164795 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-log-httpd\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.164817 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-scripts\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.164912 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-config-data\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.267031 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vldqz\" (UniqueName: \"kubernetes.io/projected/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-kube-api-access-vldqz\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: 
I0203 10:26:51.267154 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-run-httpd\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.267193 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.267230 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.267262 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-log-httpd\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.267285 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-scripts\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.267372 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-config-data\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.268030 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-run-httpd\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.268484 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-log-httpd\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.279902 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.285696 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.285783 5010 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-scripts\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.293136 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-config-data\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.297523 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vldqz\" (UniqueName: \"kubernetes.io/projected/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-kube-api-access-vldqz\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.361358 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.960042 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:52 crc kubenswrapper[5010]: I0203 10:26:52.517411 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4909daad-030c-436e-acf5-2405a74d8180" path="/var/lib/kubelet/pods/4909daad-030c-436e-acf5-2405a74d8180/volumes" Feb 03 10:26:52 crc kubenswrapper[5010]: I0203 10:26:52.805071 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Feb 03 10:26:52 crc kubenswrapper[5010]: I0203 10:26:52.805206 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:26:52 crc kubenswrapper[5010]: I0203 10:26:52.806646 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"4e9bc8f0d6381cd12e012dcf3fe06eb0672b376af0b818c286309997a48dc607"} pod="openstack/horizon-7cdcd56868-k9h7g" containerMessage="Container horizon failed startup probe, will be restarted" Feb 03 10:26:52 crc kubenswrapper[5010]: I0203 10:26:52.806710 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" containerID="cri-o://4e9bc8f0d6381cd12e012dcf3fe06eb0672b376af0b818c286309997a48dc607" gracePeriod=30 Feb 03 10:26:53 crc kubenswrapper[5010]: I0203 10:26:53.006042 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerStarted","Data":"fa75ed4d16d9d22ec602a49ea9072fdf61887d1412cdd02f5aaf820516fa7e39"} Feb 03 10:26:54 crc kubenswrapper[5010]: I0203 10:26:54.020468 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerStarted","Data":"07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35"} Feb 03 10:26:54 crc kubenswrapper[5010]: I0203 10:26:54.021023 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerStarted","Data":"75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86"} Feb 03 10:26:55 crc kubenswrapper[5010]: I0203 10:26:55.051124 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerStarted","Data":"2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988"} Feb 03 10:26:56 crc kubenswrapper[5010]: I0203 10:26:56.661344 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:26:57 crc kubenswrapper[5010]: I0203 10:26:57.078857 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerStarted","Data":"10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4"} Feb 03 10:26:57 crc kubenswrapper[5010]: I0203 10:26:57.080931 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 10:26:57 crc kubenswrapper[5010]: I0203 10:26:57.122763 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.546346967 podStartE2EDuration="7.122730735s" podCreationTimestamp="2026-02-03 10:26:50 +0000 UTC" firstStartedPulling="2026-02-03 10:26:52.007586193 +0000 UTC m=+1482.163562332" lastFinishedPulling="2026-02-03 10:26:56.583969971 +0000 UTC m=+1486.739946100" observedRunningTime="2026-02-03 10:26:57.11903733 +0000 UTC m=+1487.275013479" watchObservedRunningTime="2026-02-03 10:26:57.122730735 +0000 UTC m=+1487.278706854" Feb 03 10:26:59 crc kubenswrapper[5010]: I0203 10:26:59.075103 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:26:59 crc kubenswrapper[5010]: I0203 10:26:59.201121 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cdcd56868-k9h7g"] Feb 03 10:27:02 crc kubenswrapper[5010]: I0203 10:27:02.317875 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:02 crc kubenswrapper[5010]: I0203 10:27:02.318565 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-central-agent" containerID="cri-o://75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86" gracePeriod=30 Feb 03 10:27:02 crc kubenswrapper[5010]: I0203 10:27:02.318607 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="sg-core" containerID="cri-o://2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988" gracePeriod=30 Feb 03 10:27:02 crc kubenswrapper[5010]: I0203 10:27:02.318675 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="proxy-httpd" containerID="cri-o://10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4" gracePeriod=30 Feb 03 10:27:02 crc kubenswrapper[5010]: I0203 10:27:02.318712 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-notification-agent" 
containerID="cri-o://07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35" gracePeriod=30 Feb 03 10:27:03 crc kubenswrapper[5010]: I0203 10:27:03.202985 5010 generic.go:334] "Generic (PLEG): container finished" podID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerID="10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4" exitCode=0 Feb 03 10:27:03 crc kubenswrapper[5010]: I0203 10:27:03.203038 5010 generic.go:334] "Generic (PLEG): container finished" podID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerID="2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988" exitCode=2 Feb 03 10:27:03 crc kubenswrapper[5010]: I0203 10:27:03.203048 5010 generic.go:334] "Generic (PLEG): container finished" podID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerID="07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35" exitCode=0 Feb 03 10:27:03 crc kubenswrapper[5010]: I0203 10:27:03.203078 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerDied","Data":"10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4"} Feb 03 10:27:03 crc kubenswrapper[5010]: I0203 10:27:03.203164 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerDied","Data":"2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988"} Feb 03 10:27:03 crc kubenswrapper[5010]: I0203 10:27:03.203186 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerDied","Data":"07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35"} Feb 03 10:27:07 crc kubenswrapper[5010]: I0203 10:27:07.258751 5010 generic.go:334] "Generic (PLEG): container finished" podID="49ca9130-4a3c-4c64-8557-5c5e29df551d" containerID="529624536a7c99d14d746a21069148e69bbb624ecc0d005496493ce4e1241033" exitCode=0 Feb 03 10:27:07 crc kubenswrapper[5010]: I0203 10:27:07.258838 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" event={"ID":"49ca9130-4a3c-4c64-8557-5c5e29df551d","Type":"ContainerDied","Data":"529624536a7c99d14d746a21069148e69bbb624ecc0d005496493ce4e1241033"} Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.252962 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.281526 5010 generic.go:334] "Generic (PLEG): container finished" podID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerID="75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86" exitCode=0 Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.281617 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerDied","Data":"75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86"} Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.281717 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerDied","Data":"fa75ed4d16d9d22ec602a49ea9072fdf61887d1412cdd02f5aaf820516fa7e39"} Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.281753 5010 scope.go:117] "RemoveContainer" containerID="10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.281749 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.318593 5010 scope.go:117] "RemoveContainer" containerID="2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.351226 5010 scope.go:117] "RemoveContainer" containerID="07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360138 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-log-httpd\") pod \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360262 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-config-data\") pod \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360310 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-run-httpd\") pod \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360377 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-combined-ca-bundle\") pod \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360415 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-sg-core-conf-yaml\") pod \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360519 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vldqz\" (UniqueName: 
\"kubernetes.io/projected/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-kube-api-access-vldqz\") pod \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360653 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-scripts\") pod \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360866 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" (UID: "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360894 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" (UID: "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.361295 5010 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.361318 5010 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.368743 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-scripts" (OuterVolumeSpecName: "scripts") pod "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" (UID: "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.373616 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-kube-api-access-vldqz" (OuterVolumeSpecName: "kube-api-access-vldqz") pod "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" (UID: "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a"). InnerVolumeSpecName "kube-api-access-vldqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.386601 5010 scope.go:117] "RemoveContainer" containerID="75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.444243 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" (UID: "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.463579 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vldqz\" (UniqueName: \"kubernetes.io/projected/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-kube-api-access-vldqz\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.463616 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.463630 5010 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.486393 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" (UID: "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.511849 5010 scope.go:117] "RemoveContainer" containerID="10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.516723 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4\": container with ID starting with 10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4 not found: ID does not exist" containerID="10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.516808 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4"} err="failed to get container status \"10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4\": rpc error: code = NotFound desc = could not find container \"10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4\": container with ID starting with 10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4 not found: ID does not exist" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.516870 5010 scope.go:117] "RemoveContainer" containerID="2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.519685 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988\": container with ID starting with 2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988 not found: ID does not exist" containerID="2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.519779 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988"} err="failed to get container status \"2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988\": rpc error: code = NotFound desc 
= could not find container \"2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988\": container with ID starting with 2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988 not found: ID does not exist" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.519824 5010 scope.go:117] "RemoveContainer" containerID="07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.520407 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35\": container with ID starting with 07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35 not found: ID does not exist" containerID="07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.520430 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35"} err="failed to get container status \"07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35\": rpc error: code = NotFound desc = could not find container \"07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35\": container with ID starting with 07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35 not found: ID does not exist" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.520443 5010 scope.go:117] "RemoveContainer" containerID="75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.520767 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86\": container with ID starting with 75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86 not found: ID does not exist" containerID="75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.520794 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86"} err="failed to get container status \"75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86\": rpc error: code = NotFound desc = could not find container \"75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86\": container with ID starting with 75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86 not found: ID does not exist" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.525432 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-config-data" (OuterVolumeSpecName: "config-data") pod "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" (UID: "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.566070 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.566130 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.620324 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.631901 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.637072 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.650605 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.651719 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ca9130-4a3c-4c64-8557-5c5e29df551d" containerName="nova-cell0-conductor-db-sync" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.651756 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ca9130-4a3c-4c64-8557-5c5e29df551d" containerName="nova-cell0-conductor-db-sync" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.651781 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-notification-agent" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.651790 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-notification-agent" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.651818 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="proxy-httpd" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.651831 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="proxy-httpd" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.651846 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="sg-core" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.651855 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="sg-core" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.651886 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-central-agent" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.651897 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-central-agent" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.652275 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="proxy-httpd" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.652305 5010 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="49ca9130-4a3c-4c64-8557-5c5e29df551d" containerName="nova-cell0-conductor-db-sync" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.652328 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-central-agent" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.652342 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-notification-agent" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.652358 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="sg-core" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.658392 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.661677 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.662110 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.674051 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.770717 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-scripts\") pod \"49ca9130-4a3c-4c64-8557-5c5e29df551d\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.770958 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t4sz\" (UniqueName: \"kubernetes.io/projected/49ca9130-4a3c-4c64-8557-5c5e29df551d-kube-api-access-7t4sz\") pod \"49ca9130-4a3c-4c64-8557-5c5e29df551d\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.771118 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-combined-ca-bundle\") pod \"49ca9130-4a3c-4c64-8557-5c5e29df551d\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.771339 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-config-data\") pod \"49ca9130-4a3c-4c64-8557-5c5e29df551d\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.771814 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mzfj\" (UniqueName: \"kubernetes.io/projected/07964b2d-a893-46b5-a01d-c479361c0d37-kube-api-access-2mzfj\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.771896 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-run-httpd\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " 
pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.771958 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-config-data\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.772175 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.772231 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-scripts\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.772274 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.772333 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-log-httpd\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.775592 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-scripts" (OuterVolumeSpecName: "scripts") pod "49ca9130-4a3c-4c64-8557-5c5e29df551d" (UID: "49ca9130-4a3c-4c64-8557-5c5e29df551d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.776046 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ca9130-4a3c-4c64-8557-5c5e29df551d-kube-api-access-7t4sz" (OuterVolumeSpecName: "kube-api-access-7t4sz") pod "49ca9130-4a3c-4c64-8557-5c5e29df551d" (UID: "49ca9130-4a3c-4c64-8557-5c5e29df551d"). InnerVolumeSpecName "kube-api-access-7t4sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.801987 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49ca9130-4a3c-4c64-8557-5c5e29df551d" (UID: "49ca9130-4a3c-4c64-8557-5c5e29df551d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.805333 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-config-data" (OuterVolumeSpecName: "config-data") pod "49ca9130-4a3c-4c64-8557-5c5e29df551d" (UID: "49ca9130-4a3c-4c64-8557-5c5e29df551d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.874905 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875018 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-scripts\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875068 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875132 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-log-httpd\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875184 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mzfj\" (UniqueName: \"kubernetes.io/projected/07964b2d-a893-46b5-a01d-c479361c0d37-kube-api-access-2mzfj\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875269 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-run-httpd\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875333 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-config-data\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875464 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875483 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875499 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t4sz\" (UniqueName: \"kubernetes.io/projected/49ca9130-4a3c-4c64-8557-5c5e29df551d-kube-api-access-7t4sz\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875514 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.876680 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-log-httpd\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.879307 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-run-httpd\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.880817 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.880932 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-config-data\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.881026 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.882634 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-scripts\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.897797 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mzfj\" (UniqueName: \"kubernetes.io/projected/07964b2d-a893-46b5-a01d-c479361c0d37-kube-api-access-2mzfj\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.990481 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.299488 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" event={"ID":"49ca9130-4a3c-4c64-8557-5c5e29df551d","Type":"ContainerDied","Data":"0adb2c17444ab86300890aee767fdf0a4d7295fac27461d1c7107972deeb4e36"} Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.299841 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0adb2c17444ab86300890aee767fdf0a4d7295fac27461d1c7107972deeb4e36" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.299541 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.444091 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.448040 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.454420 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kdpzn" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.455394 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.486286 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.493791 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26dec936-0343-4d5f-8f2b-cf2a797786b5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.493893 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26dec936-0343-4d5f-8f2b-cf2a797786b5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.493967 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k88pp\" (UniqueName: \"kubernetes.io/projected/26dec936-0343-4d5f-8f2b-cf2a797786b5-kube-api-access-k88pp\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.554843 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.596770 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26dec936-0343-4d5f-8f2b-cf2a797786b5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.596870 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26dec936-0343-4d5f-8f2b-cf2a797786b5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.596930 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k88pp\" (UniqueName: \"kubernetes.io/projected/26dec936-0343-4d5f-8f2b-cf2a797786b5-kube-api-access-k88pp\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.605208 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26dec936-0343-4d5f-8f2b-cf2a797786b5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.606865 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26dec936-0343-4d5f-8f2b-cf2a797786b5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.617591 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k88pp\" (UniqueName: \"kubernetes.io/projected/26dec936-0343-4d5f-8f2b-cf2a797786b5-kube-api-access-k88pp\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.790179 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:10 crc kubenswrapper[5010]: I0203 10:27:10.320697 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerStarted","Data":"cd6841d336caf71fc510297facb1277599cbdeca80d5b944442ca08505d329ae"} Feb 03 10:27:10 crc kubenswrapper[5010]: I0203 10:27:10.331792 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 10:27:10 crc kubenswrapper[5010]: I0203 10:27:10.520802 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" path="/var/lib/kubelet/pods/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a/volumes" Feb 03 10:27:11 crc kubenswrapper[5010]: I0203 10:27:11.334441 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"26dec936-0343-4d5f-8f2b-cf2a797786b5","Type":"ContainerStarted","Data":"294e87c7889391f0b738633cfb50158a96d7c8fa5e589924d23c5e027c882204"} Feb 03 10:27:11 crc kubenswrapper[5010]: I0203 10:27:11.335015 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"26dec936-0343-4d5f-8f2b-cf2a797786b5","Type":"ContainerStarted","Data":"90dc7ccf86efebfda76973b3da7ae5f518b3f3eb365eb4de3b95d035762bfb99"} Feb 03 10:27:11 crc kubenswrapper[5010]: I0203 10:27:11.337242 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:11 crc kubenswrapper[5010]: I0203 10:27:11.339140 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerStarted","Data":"bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee"} Feb 03 10:27:11 crc kubenswrapper[5010]: I0203 10:27:11.382614 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.382572767 podStartE2EDuration="2.382572767s" podCreationTimestamp="2026-02-03 10:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:11.369919282 +0000 UTC m=+1501.525895411" watchObservedRunningTime="2026-02-03 10:27:11.382572767 +0000 UTC m=+1501.538548896" Feb 03 10:27:13 crc 
kubenswrapper[5010]: I0203 10:27:13.368574 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerStarted","Data":"9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd"} Feb 03 10:27:14 crc kubenswrapper[5010]: I0203 10:27:14.385252 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerStarted","Data":"f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28"} Feb 03 10:27:16 crc kubenswrapper[5010]: I0203 10:27:16.390729 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:27:16 crc kubenswrapper[5010]: I0203 10:27:16.391631 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:27:16 crc kubenswrapper[5010]: I0203 10:27:16.422896 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerStarted","Data":"7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e"} Feb 03 10:27:16 crc kubenswrapper[5010]: I0203 10:27:16.426532 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 10:27:16 crc kubenswrapper[5010]: I0203 10:27:16.468183 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.251652198 podStartE2EDuration="8.468134829s" podCreationTimestamp="2026-02-03 10:27:08 +0000 UTC" firstStartedPulling="2026-02-03 10:27:09.536345622 +0000 UTC m=+1499.692321751" lastFinishedPulling="2026-02-03 10:27:15.752828253 +0000 UTC m=+1505.908804382" observedRunningTime="2026-02-03 10:27:16.466151908 +0000 UTC m=+1506.622128027" watchObservedRunningTime="2026-02-03 10:27:16.468134829 +0000 UTC m=+1506.624110978" Feb 03 10:27:19 crc kubenswrapper[5010]: I0203 10:27:19.829129 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.392466 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-bqztf"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.394565 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.400836 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.401261 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.416309 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bqztf"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.441893 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjhgc\" (UniqueName: \"kubernetes.io/projected/bd352716-06a1-47da-9d5d-179bfed70cbe-kube-api-access-jjhgc\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.442105 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-scripts\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.442235 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-config-data\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.442317 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.545557 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-scripts\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.546286 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-config-data\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.546383 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.546599 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjhgc\" (UniqueName: 
\"kubernetes.io/projected/bd352716-06a1-47da-9d5d-179bfed70cbe-kube-api-access-jjhgc\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.555057 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-scripts\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.556580 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.568447 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-config-data\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.579174 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjhgc\" (UniqueName: \"kubernetes.io/projected/bd352716-06a1-47da-9d5d-179bfed70cbe-kube-api-access-jjhgc\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.726422 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.729623 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.735843 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.737613 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.750965 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.820804 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.827081 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.840449 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.853665 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.867956 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-config-data\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.868044 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.868114 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae76c0d-99bf-42f4-8678-5c1693262ecc-logs\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.868140 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.868174 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-config-data\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.868230 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srb2s\" (UniqueName: \"kubernetes.io/projected/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-kube-api-access-srb2s\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.868268 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndfcm\" (UniqueName: \"kubernetes.io/projected/dae76c0d-99bf-42f4-8678-5c1693262ecc-kube-api-access-ndfcm\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.881127 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.885462 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.889703 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.910325 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972112 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx5mj\" (UniqueName: \"kubernetes.io/projected/4df0ad18-8721-40ef-91bc-c609d61f1c1b-kube-api-access-wx5mj\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972192 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srb2s\" (UniqueName: \"kubernetes.io/projected/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-kube-api-access-srb2s\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972273 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndfcm\" (UniqueName: \"kubernetes.io/projected/dae76c0d-99bf-42f4-8678-5c1693262ecc-kube-api-access-ndfcm\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972371 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972445 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-config-data\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972528 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972654 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972752 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae76c0d-99bf-42f4-8678-5c1693262ecc-logs\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972799 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972879 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-config-data\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.982939 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae76c0d-99bf-42f4-8678-5c1693262ecc-logs\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.985387 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-config-data\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.987255 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.016761 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.021051 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndfcm\" (UniqueName: \"kubernetes.io/projected/dae76c0d-99bf-42f4-8678-5c1693262ecc-kube-api-access-ndfcm\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.021104 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-config-data\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.029913 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srb2s\" (UniqueName: \"kubernetes.io/projected/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-kube-api-access-srb2s\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.060110 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.079151 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.079299 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx5mj\" (UniqueName: \"kubernetes.io/projected/4df0ad18-8721-40ef-91bc-c609d61f1c1b-kube-api-access-wx5mj\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.079402 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.088777 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.091773 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.155892 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.158051 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.182776 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx5mj\" (UniqueName: \"kubernetes.io/projected/4df0ad18-8721-40ef-91bc-c609d61f1c1b-kube-api-access-wx5mj\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.182970 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqcdz\" (UniqueName: \"kubernetes.io/projected/7e9abb34-c41e-4b86-835c-1107ad5eec49-kube-api-access-tqcdz\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.183045 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e9abb34-c41e-4b86-835c-1107ad5eec49-logs\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.183118 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.183168 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-config-data\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.183731 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.231881 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.243280 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.245803 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.299622 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-x25nd"] Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.301475 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.301639 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-config-data\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.302031 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqcdz\" (UniqueName: \"kubernetes.io/projected/7e9abb34-c41e-4b86-835c-1107ad5eec49-kube-api-access-tqcdz\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.302138 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e9abb34-c41e-4b86-835c-1107ad5eec49-logs\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.302396 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.303539 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e9abb34-c41e-4b86-835c-1107ad5eec49-logs\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.340799 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.349793 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqcdz\" (UniqueName: \"kubernetes.io/projected/7e9abb34-c41e-4b86-835c-1107ad5eec49-kube-api-access-tqcdz\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.357702 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-config-data\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.369412 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-x25nd"] Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.409397 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 
10:27:21.422465 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.422859 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-svc\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.423193 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.423284 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-config\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.423319 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdv6g\" (UniqueName: \"kubernetes.io/projected/55ad6744-8ba2-49c4-bf2c-986f85f40079-kube-api-access-vdv6g\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.527324 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.527408 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-config\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.527442 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdv6g\" (UniqueName: \"kubernetes.io/projected/55ad6744-8ba2-49c4-bf2c-986f85f40079-kube-api-access-vdv6g\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.527481 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 
03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.527583 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.528952 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-svc\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.532109 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-svc\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.537288 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.538047 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.542984 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.543448 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.545284 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-config\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.579607 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdv6g\" (UniqueName: \"kubernetes.io/projected/55ad6744-8ba2-49c4-bf2c-986f85f40079-kube-api-access-vdv6g\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.702491 5010 util.go:30] "No sandbox for pod can be found. 
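[editor's note] Unlike the nova pods, every dnsmasq-dns-757b4f8459-x25nd volume in this block (config, dns-svc, dns-swift-storage-0, ovsdbserver-nb, ovsdbserver-sb) is backed by the kubernetes.io/configmap plugin, plus one projected kube-api-access-vdv6g token. A sketch of the ConfigMap-backed sources, assuming the ConfigMap names mirror the volume names (the log only records the volume names):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// ConfigMap-backed volumes implied by the kubernetes.io/configmap
// mounts above; the referenced ConfigMap names are assumptions.
func dnsmasqVolumes() []corev1.Volume {
	names := []string{"config", "dns-svc", "dns-swift-storage-0", "ovsdbserver-nb", "ovsdbserver-sb"}
	vols := make([]corev1.Volume, 0, len(names))
	for _, n := range names {
		vols = append(vols, corev1.Volume{
			Name: n,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: n}, // assumed 1:1 naming
				},
			},
		})
	}
	return vols
}

func main() {
	for _, v := range dnsmasqVolumes() {
		fmt.Println(v.Name)
	}
}
```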
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.837105 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bqztf"] Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.055978 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:22 crc kubenswrapper[5010]: W0203 10:27:22.059884 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddae76c0d_99bf_42f4_8678_5c1693262ecc.slice/crio-6078c7a1e48bd775bca8b987098ebda1a5e82da5d6e8ba44c4019d49bd1f8dd5 WatchSource:0}: Error finding container 6078c7a1e48bd775bca8b987098ebda1a5e82da5d6e8ba44c4019d49bd1f8dd5: Status 404 returned error can't find the container with id 6078c7a1e48bd775bca8b987098ebda1a5e82da5d6e8ba44c4019d49bd1f8dd5 Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.319090 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwnxk"] Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.322333 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.325693 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.327316 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.373461 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwnxk"] Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.398096 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.399249 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.399481 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-scripts\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.399643 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-config-data\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.399840 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4rkj\" (UniqueName: \"kubernetes.io/projected/726ff8cb-3f2f-41a6-a61e-a79ed194505f-kube-api-access-w4rkj\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: 
\"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.459103 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.502012 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.502111 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-scripts\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.502174 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-config-data\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.502929 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4rkj\" (UniqueName: \"kubernetes.io/projected/726ff8cb-3f2f-41a6-a61e-a79ed194505f-kube-api-access-w4rkj\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.511432 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.514878 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-scripts\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.516100 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-config-data\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.530331 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4rkj\" (UniqueName: \"kubernetes.io/projected/726ff8cb-3f2f-41a6-a61e-a79ed194505f-kube-api-access-w4rkj\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.593491 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e","Type":"ContainerStarted","Data":"bf460f6ef526dd4f94d755e6904b0e4b071bb805f8064c527674ef4f7512a907"} Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.594385 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.595624 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bqztf" event={"ID":"bd352716-06a1-47da-9d5d-179bfed70cbe","Type":"ContainerStarted","Data":"9df92dcb078ed6d52131766accb050ab09c268253b0a5a65b5f79c4623de44a8"} Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.595660 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bqztf" event={"ID":"bd352716-06a1-47da-9d5d-179bfed70cbe","Type":"ContainerStarted","Data":"2bad36a390bd1a99859cef6466645f1e43e62c5d6ab7ef7aed9fbbdabd1bb08c"} Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.608264 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dae76c0d-99bf-42f4-8678-5c1693262ecc","Type":"ContainerStarted","Data":"6078c7a1e48bd775bca8b987098ebda1a5e82da5d6e8ba44c4019d49bd1f8dd5"} Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.610937 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4df0ad18-8721-40ef-91bc-c609d61f1c1b","Type":"ContainerStarted","Data":"53f9f5ad7c65c9cd148ac8aad3fd34e98580d6dfe75ba51eece28e29be12ce47"} Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.669469 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.731465 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-bqztf" podStartSLOduration=2.731437263 podStartE2EDuration="2.731437263s" podCreationTimestamp="2026-02-03 10:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:22.651652134 +0000 UTC m=+1512.807628263" watchObservedRunningTime="2026-02-03 10:27:22.731437263 +0000 UTC m=+1512.887413392" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.844531 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-x25nd"] Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.457309 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwnxk"] Feb 03 10:27:23 crc kubenswrapper[5010]: W0203 10:27:23.484495 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod726ff8cb_3f2f_41a6_a61e_a79ed194505f.slice/crio-06bc716526af09e9468bec49130055a7e19cac3913d0b3e2ec8f37184dcd4c5b WatchSource:0}: Error finding container 06bc716526af09e9468bec49130055a7e19cac3913d0b3e2ec8f37184dcd4c5b: Status 404 returned error can't find the container with id 06bc716526af09e9468bec49130055a7e19cac3913d0b3e2ec8f37184dcd4c5b Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.641029 5010 generic.go:334] "Generic (PLEG): container finished" podID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerID="4e9bc8f0d6381cd12e012dcf3fe06eb0672b376af0b818c286309997a48dc607" exitCode=137 Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.641164 5010 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerDied","Data":"4e9bc8f0d6381cd12e012dcf3fe06eb0672b376af0b818c286309997a48dc607"} Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.641295 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon-log" containerID="cri-o://d39b7b37971eb5d63b6cabefb740041e4cc9cc6265fc84bc4b6ff52605291d6a" gracePeriod=30 Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.641442 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" containerID="cri-o://ccb768185c1be80c1cf2232c6f15632edb6af133c55f2bd369d8a13606beb3d6" gracePeriod=30 Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.641365 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerStarted","Data":"ccb768185c1be80c1cf2232c6f15632edb6af133c55f2bd369d8a13606beb3d6"} Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.641672 5010 scope.go:117] "RemoveContainer" containerID="2cc2ce22d6ea86e28f6eb264d0d9c9e725b7685d6ab0fd02531064a6b9b028b0" Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.649515 5010 generic.go:334] "Generic (PLEG): container finished" podID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerID="1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33" exitCode=0 Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.649620 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" event={"ID":"55ad6744-8ba2-49c4-bf2c-986f85f40079","Type":"ContainerDied","Data":"1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33"} Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.650523 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" event={"ID":"55ad6744-8ba2-49c4-bf2c-986f85f40079","Type":"ContainerStarted","Data":"7edb2d5b18afc723b6414cab56e64b2430add9e831d1db279a0d0981b7c44bb5"} Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.654286 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e9abb34-c41e-4b86-835c-1107ad5eec49","Type":"ContainerStarted","Data":"5fcbbf7f928cc0dae4b0f264be7c99f38aab374b25b87187f9d00a621247d310"} Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.663074 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" event={"ID":"726ff8cb-3f2f-41a6-a61e-a79ed194505f","Type":"ContainerStarted","Data":"06bc716526af09e9468bec49130055a7e19cac3913d0b3e2ec8f37184dcd4c5b"} Feb 03 10:27:24 crc kubenswrapper[5010]: I0203 10:27:24.676711 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" event={"ID":"726ff8cb-3f2f-41a6-a61e-a79ed194505f","Type":"ContainerStarted","Data":"9ad6b084a459424fdad0649a5c871c7f22695bf5efe4abdfaf37dff65c794a08"} Feb 03 10:27:24 crc kubenswrapper[5010]: I0203 10:27:24.714748 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" podStartSLOduration=2.714707497 podStartE2EDuration="2.714707497s" podCreationTimestamp="2026-02-03 10:27:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:24.700225385 +0000 UTC m=+1514.856201524" watchObservedRunningTime="2026-02-03 10:27:24.714707497 +0000 UTC m=+1514.870683626" Feb 03 10:27:25 crc kubenswrapper[5010]: I0203 10:27:25.208305 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:25 crc kubenswrapper[5010]: I0203 10:27:25.221097 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.717272 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4df0ad18-8721-40ef-91bc-c609d61f1c1b","Type":"ContainerStarted","Data":"ae9cd98547d8fff1706d863c1e8f43d79f4ce19a78307424e4a816129ff20e12"} Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.719951 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e","Type":"ContainerStarted","Data":"fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d"} Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.717519 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="4df0ad18-8721-40ef-91bc-c609d61f1c1b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://ae9cd98547d8fff1706d863c1e8f43d79f4ce19a78307424e4a816129ff20e12" gracePeriod=30 Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.734934 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dae76c0d-99bf-42f4-8678-5c1693262ecc","Type":"ContainerStarted","Data":"241c9e9f88442e26f4c60b5bf7f593615d35fb056df34c097b437a3289e1ed1e"} Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.746949 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.095624143 podStartE2EDuration="7.746918345s" podCreationTimestamp="2026-02-03 10:27:20 +0000 UTC" firstStartedPulling="2026-02-03 10:27:22.386652839 +0000 UTC m=+1512.542628958" lastFinishedPulling="2026-02-03 10:27:27.037947031 +0000 UTC m=+1517.193923160" observedRunningTime="2026-02-03 10:27:27.741955978 +0000 UTC m=+1517.897932117" watchObservedRunningTime="2026-02-03 10:27:27.746918345 +0000 UTC m=+1517.902894474" Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.755741 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" event={"ID":"55ad6744-8ba2-49c4-bf2c-986f85f40079","Type":"ContainerStarted","Data":"023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74"} Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.756195 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.760546 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e9abb34-c41e-4b86-835c-1107ad5eec49","Type":"ContainerStarted","Data":"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303"} Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.776907 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.111559852 podStartE2EDuration="7.776877744s" podCreationTimestamp="2026-02-03 
10:27:20 +0000 UTC" firstStartedPulling="2026-02-03 10:27:22.369565631 +0000 UTC m=+1512.525541760" lastFinishedPulling="2026-02-03 10:27:27.034883523 +0000 UTC m=+1517.190859652" observedRunningTime="2026-02-03 10:27:27.766243721 +0000 UTC m=+1517.922219850" watchObservedRunningTime="2026-02-03 10:27:27.776877744 +0000 UTC m=+1517.932853873" Feb 03 10:27:28 crc kubenswrapper[5010]: I0203 10:27:28.775524 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dae76c0d-99bf-42f4-8678-5c1693262ecc","Type":"ContainerStarted","Data":"c99bed3bf87dd9576980ecaf735b0a2713f9773f5d114b1af04d87bd2cd7c5e6"} Feb 03 10:27:28 crc kubenswrapper[5010]: I0203 10:27:28.782861 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-log" containerID="cri-o://30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303" gracePeriod=30 Feb 03 10:27:28 crc kubenswrapper[5010]: I0203 10:27:28.783213 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e9abb34-c41e-4b86-835c-1107ad5eec49","Type":"ContainerStarted","Data":"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0"} Feb 03 10:27:28 crc kubenswrapper[5010]: I0203 10:27:28.783816 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-metadata" containerID="cri-o://3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0" gracePeriod=30 Feb 03 10:27:28 crc kubenswrapper[5010]: I0203 10:27:28.807903 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" podStartSLOduration=7.807877278 podStartE2EDuration="7.807877278s" podCreationTimestamp="2026-02-03 10:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:27.793679616 +0000 UTC m=+1517.949655745" watchObservedRunningTime="2026-02-03 10:27:28.807877278 +0000 UTC m=+1518.963853407" Feb 03 10:27:28 crc kubenswrapper[5010]: I0203 10:27:28.813580 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.836462235 podStartE2EDuration="8.813553663s" podCreationTimestamp="2026-02-03 10:27:20 +0000 UTC" firstStartedPulling="2026-02-03 10:27:22.072744219 +0000 UTC m=+1512.228720348" lastFinishedPulling="2026-02-03 10:27:27.049835647 +0000 UTC m=+1517.205811776" observedRunningTime="2026-02-03 10:27:28.803972847 +0000 UTC m=+1518.959948976" watchObservedRunningTime="2026-02-03 10:27:28.813553663 +0000 UTC m=+1518.969529792" Feb 03 10:27:28 crc kubenswrapper[5010]: I0203 10:27:28.846812 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.406280887 podStartE2EDuration="7.846785957s" podCreationTimestamp="2026-02-03 10:27:21 +0000 UTC" firstStartedPulling="2026-02-03 10:27:22.594380853 +0000 UTC m=+1512.750356982" lastFinishedPulling="2026-02-03 10:27:27.034885923 +0000 UTC m=+1517.190862052" observedRunningTime="2026-02-03 10:27:28.841361577 +0000 UTC m=+1518.997337706" watchObservedRunningTime="2026-02-03 10:27:28.846785957 +0000 UTC m=+1519.002762086" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.452812 5010 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.553116 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqcdz\" (UniqueName: \"kubernetes.io/projected/7e9abb34-c41e-4b86-835c-1107ad5eec49-kube-api-access-tqcdz\") pod \"7e9abb34-c41e-4b86-835c-1107ad5eec49\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.553324 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-combined-ca-bundle\") pod \"7e9abb34-c41e-4b86-835c-1107ad5eec49\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.553488 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-config-data\") pod \"7e9abb34-c41e-4b86-835c-1107ad5eec49\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.553535 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e9abb34-c41e-4b86-835c-1107ad5eec49-logs\") pod \"7e9abb34-c41e-4b86-835c-1107ad5eec49\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.553858 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e9abb34-c41e-4b86-835c-1107ad5eec49-logs" (OuterVolumeSpecName: "logs") pod "7e9abb34-c41e-4b86-835c-1107ad5eec49" (UID: "7e9abb34-c41e-4b86-835c-1107ad5eec49"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.555673 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e9abb34-c41e-4b86-835c-1107ad5eec49-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.564997 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e9abb34-c41e-4b86-835c-1107ad5eec49-kube-api-access-tqcdz" (OuterVolumeSpecName: "kube-api-access-tqcdz") pod "7e9abb34-c41e-4b86-835c-1107ad5eec49" (UID: "7e9abb34-c41e-4b86-835c-1107ad5eec49"). InnerVolumeSpecName "kube-api-access-tqcdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.610413 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e9abb34-c41e-4b86-835c-1107ad5eec49" (UID: "7e9abb34-c41e-4b86-835c-1107ad5eec49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.616437 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-config-data" (OuterVolumeSpecName: "config-data") pod "7e9abb34-c41e-4b86-835c-1107ad5eec49" (UID: "7e9abb34-c41e-4b86-835c-1107ad5eec49"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.658402 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.658453 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.658467 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqcdz\" (UniqueName: \"kubernetes.io/projected/7e9abb34-c41e-4b86-835c-1107ad5eec49-kube-api-access-tqcdz\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.801512 5010 generic.go:334] "Generic (PLEG): container finished" podID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerID="3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0" exitCode=0 Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.801552 5010 generic.go:334] "Generic (PLEG): container finished" podID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerID="30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303" exitCode=143 Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.802606 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e9abb34-c41e-4b86-835c-1107ad5eec49","Type":"ContainerDied","Data":"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0"} Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.802652 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.802691 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e9abb34-c41e-4b86-835c-1107ad5eec49","Type":"ContainerDied","Data":"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303"} Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.802710 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e9abb34-c41e-4b86-835c-1107ad5eec49","Type":"ContainerDied","Data":"5fcbbf7f928cc0dae4b0f264be7c99f38aab374b25b87187f9d00a621247d310"} Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.802733 5010 scope.go:117] "RemoveContainer" containerID="3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.839591 5010 scope.go:117] "RemoveContainer" containerID="30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.865534 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.889205 5010 scope.go:117] "RemoveContainer" containerID="3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0" Feb 03 10:27:29 crc kubenswrapper[5010]: E0203 10:27:29.889803 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0\": container with ID starting with 3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0 not found: ID does not exist" containerID="3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.889838 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0"} err="failed to get container status \"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0\": rpc error: code = NotFound desc = could not find container \"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0\": container with ID starting with 3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0 not found: ID does not exist" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.889864 5010 scope.go:117] "RemoveContainer" containerID="30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303" Feb 03 10:27:29 crc kubenswrapper[5010]: E0203 10:27:29.891588 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303\": container with ID starting with 30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303 not found: ID does not exist" containerID="30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.891630 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303"} err="failed to get container status \"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303\": rpc error: code = NotFound desc = could not find container \"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303\": container with ID starting with 
30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303 not found: ID does not exist" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.891653 5010 scope.go:117] "RemoveContainer" containerID="3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.892547 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0"} err="failed to get container status \"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0\": rpc error: code = NotFound desc = could not find container \"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0\": container with ID starting with 3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0 not found: ID does not exist" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.892594 5010 scope.go:117] "RemoveContainer" containerID="30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.894027 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303"} err="failed to get container status \"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303\": rpc error: code = NotFound desc = could not find container \"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303\": container with ID starting with 30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303 not found: ID does not exist" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.899771 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.910968 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:29 crc kubenswrapper[5010]: E0203 10:27:29.911941 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-log" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.911988 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-log" Feb 03 10:27:29 crc kubenswrapper[5010]: E0203 10:27:29.912062 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-metadata" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.912075 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-metadata" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.912364 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-log" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.912396 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-metadata" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.914581 5010 util.go:30] "No sandbox for pod can be found. 
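[editor's note] Note that the REMOVE/ADD pair above re-creates nova-metadata-0 under a fresh UID: the cpu/memory-manager stale-state cleanup still names 7e9abb34-c41e-4b86-835c-1107ad5eec49, while the volume events that follow use 9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e. A stdlib sketch that surfaces such UID turnover per pod name; it targets the escaped pod \"name\" (UID: \"...\") form used in the volume lines and assumes one entry per line:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var podUIDRe = regexp.MustCompile(`pod \\"([^"\\]+)\\" \(UID: \\"([^"\\]+)\\"\)`)

func main() {
	seen := map[string][]string{} // pod name -> UIDs in order of appearance
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20)
	for sc.Scan() {
		for _, m := range podUIDRe.FindAllStringSubmatch(sc.Text(), -1) {
			name, uid := m[1], m[2]
			uids := seen[name]
			if len(uids) == 0 || uids[len(uids)-1] != uid {
				seen[name] = append(uids, uid)
			}
		}
	}
	for name, uids := range seen {
		if len(uids) > 1 { // re-created at least once
			fmt.Println(name, "->", uids)
		}
	}
}
```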
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.918237 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.918334 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.925527 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.964457 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.964515 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-config-data\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.964559 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-logs\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.964603 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddtdd\" (UniqueName: \"kubernetes.io/projected/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-kube-api-access-ddtdd\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.964642 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.065866 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddtdd\" (UniqueName: \"kubernetes.io/projected/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-kube-api-access-ddtdd\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.065942 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.066067 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.066107 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-config-data\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.066150 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-logs\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.066604 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-logs\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.070068 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.071839 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-config-data\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.077040 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.084239 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddtdd\" (UniqueName: \"kubernetes.io/projected/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-kube-api-access-ddtdd\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.249651 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.527098 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" path="/var/lib/kubelet/pods/7e9abb34-c41e-4b86-835c-1107ad5eec49/volumes" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.992539 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.061473 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.061551 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.242434 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.242519 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.248407 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.305375 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.829704 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e","Type":"ContainerStarted","Data":"add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2"} Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.830180 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e","Type":"ContainerStarted","Data":"62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8"} Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.830199 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e","Type":"ContainerStarted","Data":"d287adc54325882a622782a3232f723bb21563ecbced55297361e7dc2d758abc"} Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.866763 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.866639748 podStartE2EDuration="2.866639748s" podCreationTimestamp="2026-02-03 10:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:31.858387846 +0000 UTC m=+1522.014363975" watchObservedRunningTime="2026-02-03 10:27:31.866639748 +0000 UTC m=+1522.022615887" Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.895600 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 03 10:27:32 crc kubenswrapper[5010]: I0203 10:27:32.148545 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 10:27:32 crc kubenswrapper[5010]: I0203 10:27:32.148739 5010 prober.go:107] "Probe 
failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 10:27:32 crc kubenswrapper[5010]: I0203 10:27:32.804208 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:27:35 crc kubenswrapper[5010]: I0203 10:27:35.250933 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 10:27:35 crc kubenswrapper[5010]: I0203 10:27:35.251455 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 10:27:35 crc kubenswrapper[5010]: I0203 10:27:35.877857 5010 generic.go:334] "Generic (PLEG): container finished" podID="726ff8cb-3f2f-41a6-a61e-a79ed194505f" containerID="9ad6b084a459424fdad0649a5c871c7f22695bf5efe4abdfaf37dff65c794a08" exitCode=0 Feb 03 10:27:35 crc kubenswrapper[5010]: I0203 10:27:35.877935 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" event={"ID":"726ff8cb-3f2f-41a6-a61e-a79ed194505f","Type":"ContainerDied","Data":"9ad6b084a459424fdad0649a5c871c7f22695bf5efe4abdfaf37dff65c794a08"} Feb 03 10:27:35 crc kubenswrapper[5010]: I0203 10:27:35.880342 5010 generic.go:334] "Generic (PLEG): container finished" podID="bd352716-06a1-47da-9d5d-179bfed70cbe" containerID="9df92dcb078ed6d52131766accb050ab09c268253b0a5a65b5f79c4623de44a8" exitCode=0 Feb 03 10:27:35 crc kubenswrapper[5010]: I0203 10:27:35.880392 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bqztf" event={"ID":"bd352716-06a1-47da-9d5d-179bfed70cbe","Type":"ContainerDied","Data":"9df92dcb078ed6d52131766accb050ab09c268253b0a5a65b5f79c4623de44a8"} Feb 03 10:27:36 crc kubenswrapper[5010]: I0203 10:27:36.712580 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:36 crc kubenswrapper[5010]: I0203 10:27:36.837402 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6vbfz"] Feb 03 10:27:36 crc kubenswrapper[5010]: I0203 10:27:36.839318 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" podUID="b88c8b02-54df-4761-acc8-c959005f4444" containerName="dnsmasq-dns" containerID="cri-o://fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77" gracePeriod=10 Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.606801 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.722736 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4rkj\" (UniqueName: \"kubernetes.io/projected/726ff8cb-3f2f-41a6-a61e-a79ed194505f-kube-api-access-w4rkj\") pod \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.722801 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-scripts\") pod \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.722834 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-combined-ca-bundle\") pod \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.722905 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-config-data\") pod \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.731794 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-scripts" (OuterVolumeSpecName: "scripts") pod "726ff8cb-3f2f-41a6-a61e-a79ed194505f" (UID: "726ff8cb-3f2f-41a6-a61e-a79ed194505f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.732636 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.733573 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/726ff8cb-3f2f-41a6-a61e-a79ed194505f-kube-api-access-w4rkj" (OuterVolumeSpecName: "kube-api-access-w4rkj") pod "726ff8cb-3f2f-41a6-a61e-a79ed194505f" (UID: "726ff8cb-3f2f-41a6-a61e-a79ed194505f"). InnerVolumeSpecName "kube-api-access-w4rkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.745878 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.765613 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-config-data" (OuterVolumeSpecName: "config-data") pod "726ff8cb-3f2f-41a6-a61e-a79ed194505f" (UID: "726ff8cb-3f2f-41a6-a61e-a79ed194505f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.793416 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "726ff8cb-3f2f-41a6-a61e-a79ed194505f" (UID: "726ff8cb-3f2f-41a6-a61e-a79ed194505f"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.825135 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-combined-ca-bundle\") pod \"bd352716-06a1-47da-9d5d-179bfed70cbe\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.825258 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjhgc\" (UniqueName: \"kubernetes.io/projected/bd352716-06a1-47da-9d5d-179bfed70cbe-kube-api-access-jjhgc\") pod \"bd352716-06a1-47da-9d5d-179bfed70cbe\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.825352 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-config-data\") pod \"bd352716-06a1-47da-9d5d-179bfed70cbe\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.825507 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-scripts\") pod \"bd352716-06a1-47da-9d5d-179bfed70cbe\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.826188 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4rkj\" (UniqueName: \"kubernetes.io/projected/726ff8cb-3f2f-41a6-a61e-a79ed194505f-kube-api-access-w4rkj\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.826211 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.826235 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.826245 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.832186 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-scripts" (OuterVolumeSpecName: "scripts") pod "bd352716-06a1-47da-9d5d-179bfed70cbe" (UID: "bd352716-06a1-47da-9d5d-179bfed70cbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.832869 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd352716-06a1-47da-9d5d-179bfed70cbe-kube-api-access-jjhgc" (OuterVolumeSpecName: "kube-api-access-jjhgc") pod "bd352716-06a1-47da-9d5d-179bfed70cbe" (UID: "bd352716-06a1-47da-9d5d-179bfed70cbe"). InnerVolumeSpecName "kube-api-access-jjhgc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.886021 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-config-data" (OuterVolumeSpecName: "config-data") pod "bd352716-06a1-47da-9d5d-179bfed70cbe" (UID: "bd352716-06a1-47da-9d5d-179bfed70cbe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.886081 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd352716-06a1-47da-9d5d-179bfed70cbe" (UID: "bd352716-06a1-47da-9d5d-179bfed70cbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.927739 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-sb\") pod \"b88c8b02-54df-4761-acc8-c959005f4444\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.928035 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-svc\") pod \"b88c8b02-54df-4761-acc8-c959005f4444\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.928083 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-nb\") pod \"b88c8b02-54df-4761-acc8-c959005f4444\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.928209 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8w9d\" (UniqueName: \"kubernetes.io/projected/b88c8b02-54df-4761-acc8-c959005f4444-kube-api-access-d8w9d\") pod \"b88c8b02-54df-4761-acc8-c959005f4444\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.928330 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-swift-storage-0\") pod \"b88c8b02-54df-4761-acc8-c959005f4444\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.928469 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-config\") pod \"b88c8b02-54df-4761-acc8-c959005f4444\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.929031 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.929048 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.929062 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjhgc\" (UniqueName: \"kubernetes.io/projected/bd352716-06a1-47da-9d5d-179bfed70cbe-kube-api-access-jjhgc\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.929071 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.933597 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b88c8b02-54df-4761-acc8-c959005f4444-kube-api-access-d8w9d" (OuterVolumeSpecName: "kube-api-access-d8w9d") pod "b88c8b02-54df-4761-acc8-c959005f4444" (UID: "b88c8b02-54df-4761-acc8-c959005f4444"). InnerVolumeSpecName "kube-api-access-d8w9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.945584 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.945742 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" event={"ID":"726ff8cb-3f2f-41a6-a61e-a79ed194505f","Type":"ContainerDied","Data":"06bc716526af09e9468bec49130055a7e19cac3913d0b3e2ec8f37184dcd4c5b"} Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.945820 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06bc716526af09e9468bec49130055a7e19cac3913d0b3e2ec8f37184dcd4c5b" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.978511 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bqztf" event={"ID":"bd352716-06a1-47da-9d5d-179bfed70cbe","Type":"ContainerDied","Data":"2bad36a390bd1a99859cef6466645f1e43e62c5d6ab7ef7aed9fbbdabd1bb08c"} Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.978594 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bad36a390bd1a99859cef6466645f1e43e62c5d6ab7ef7aed9fbbdabd1bb08c" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.978729 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.994327 5010 generic.go:334] "Generic (PLEG): container finished" podID="b88c8b02-54df-4761-acc8-c959005f4444" containerID="fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77" exitCode=0 Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.994406 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" event={"ID":"b88c8b02-54df-4761-acc8-c959005f4444","Type":"ContainerDied","Data":"fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77"} Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.994461 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" event={"ID":"b88c8b02-54df-4761-acc8-c959005f4444","Type":"ContainerDied","Data":"2d51e4ddd011d0ec5a5a6ac940b6dc440f8c2ebbdfedfd082c8cf295f749780f"} Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.994493 5010 scope.go:117] "RemoveContainer" containerID="fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.994718 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.013274 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b88c8b02-54df-4761-acc8-c959005f4444" (UID: "b88c8b02-54df-4761-acc8-c959005f4444"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.032139 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8w9d\" (UniqueName: \"kubernetes.io/projected/b88c8b02-54df-4761-acc8-c959005f4444-kube-api-access-d8w9d\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.032198 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.072625 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-config" (OuterVolumeSpecName: "config") pod "b88c8b02-54df-4761-acc8-c959005f4444" (UID: "b88c8b02-54df-4761-acc8-c959005f4444"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.080794 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b88c8b02-54df-4761-acc8-c959005f4444" (UID: "b88c8b02-54df-4761-acc8-c959005f4444"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.122591 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b88c8b02-54df-4761-acc8-c959005f4444" (UID: "b88c8b02-54df-4761-acc8-c959005f4444"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.125954 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 10:27:38 crc kubenswrapper[5010]: E0203 10:27:38.126673 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88c8b02-54df-4761-acc8-c959005f4444" containerName="dnsmasq-dns" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.126698 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88c8b02-54df-4761-acc8-c959005f4444" containerName="dnsmasq-dns" Feb 03 10:27:38 crc kubenswrapper[5010]: E0203 10:27:38.126741 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726ff8cb-3f2f-41a6-a61e-a79ed194505f" containerName="nova-cell1-conductor-db-sync" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.126749 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="726ff8cb-3f2f-41a6-a61e-a79ed194505f" containerName="nova-cell1-conductor-db-sync" Feb 03 10:27:38 crc kubenswrapper[5010]: E0203 10:27:38.126769 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88c8b02-54df-4761-acc8-c959005f4444" containerName="init" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.126776 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88c8b02-54df-4761-acc8-c959005f4444" containerName="init" Feb 03 10:27:38 crc kubenswrapper[5010]: E0203 10:27:38.126804 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd352716-06a1-47da-9d5d-179bfed70cbe" containerName="nova-manage" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.126811 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd352716-06a1-47da-9d5d-179bfed70cbe" containerName="nova-manage" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.127069 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="726ff8cb-3f2f-41a6-a61e-a79ed194505f" containerName="nova-cell1-conductor-db-sync" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.127090 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd352716-06a1-47da-9d5d-179bfed70cbe" containerName="nova-manage" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.127100 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="b88c8b02-54df-4761-acc8-c959005f4444" containerName="dnsmasq-dns" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.128024 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.135834 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.135875 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.135888 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.138155 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.145990 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.190414 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b88c8b02-54df-4761-acc8-c959005f4444" (UID: "b88c8b02-54df-4761-acc8-c959005f4444"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.226382 5010 scope.go:117] "RemoveContainer" containerID="49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.237938 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291a9878-85fe-4988-8a7d-1da10ac49b23-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.238074 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kcxg\" (UniqueName: \"kubernetes.io/projected/291a9878-85fe-4988-8a7d-1da10ac49b23-kube-api-access-8kcxg\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.238152 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291a9878-85fe-4988-8a7d-1da10ac49b23-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.238363 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.240128 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.240603 5010 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-log" containerID="cri-o://241c9e9f88442e26f4c60b5bf7f593615d35fb056df34c097b437a3289e1ed1e" gracePeriod=30 Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.241048 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-api" containerID="cri-o://c99bed3bf87dd9576980ecaf735b0a2713f9773f5d114b1af04d87bd2cd7c5e6" gracePeriod=30 Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.367609 5010 scope.go:117] "RemoveContainer" containerID="fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77" Feb 03 10:27:38 crc kubenswrapper[5010]: E0203 10:27:38.368781 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77\": container with ID starting with fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77 not found: ID does not exist" containerID="fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.368901 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77"} err="failed to get container status \"fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77\": rpc error: code = NotFound desc = could not find container \"fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77\": container with ID starting with fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77 not found: ID does not exist" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.369014 5010 scope.go:117] "RemoveContainer" containerID="49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50" Feb 03 10:27:38 crc kubenswrapper[5010]: E0203 10:27:38.369361 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50\": container with ID starting with 49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50 not found: ID does not exist" containerID="49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.369505 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50"} err="failed to get container status \"49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50\": rpc error: code = NotFound desc = could not find container \"49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50\": container with ID starting with 49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50 not found: ID does not exist" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.390958 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.391349 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" containerName="nova-scheduler-scheduler" containerID="cri-o://fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" gracePeriod=30 Feb 03 10:27:38 crc 
kubenswrapper[5010]: I0203 10:27:38.395613 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kcxg\" (UniqueName: \"kubernetes.io/projected/291a9878-85fe-4988-8a7d-1da10ac49b23-kube-api-access-8kcxg\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.395830 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291a9878-85fe-4988-8a7d-1da10ac49b23-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.396307 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291a9878-85fe-4988-8a7d-1da10ac49b23-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.408582 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291a9878-85fe-4988-8a7d-1da10ac49b23-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.413179 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291a9878-85fe-4988-8a7d-1da10ac49b23-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.478435 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.478779 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-log" containerID="cri-o://62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8" gracePeriod=30 Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.479249 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kcxg\" (UniqueName: \"kubernetes.io/projected/291a9878-85fe-4988-8a7d-1da10ac49b23-kube-api-access-8kcxg\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.479440 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-metadata" containerID="cri-o://add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2" gracePeriod=30 Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.529903 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6vbfz"] Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.530293 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6vbfz"] Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.670085 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.016204 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.058078 5010 generic.go:334] "Generic (PLEG): container finished" podID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerID="62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8" exitCode=143 Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.058281 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e","Type":"ContainerDied","Data":"62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8"} Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.075803 5010 generic.go:334] "Generic (PLEG): container finished" podID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerID="241c9e9f88442e26f4c60b5bf7f593615d35fb056df34c097b437a3289e1ed1e" exitCode=143 Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.075975 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dae76c0d-99bf-42f4-8678-5c1693262ecc","Type":"ContainerDied","Data":"241c9e9f88442e26f4c60b5bf7f593615d35fb056df34c097b437a3289e1ed1e"} Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.402232 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.828800 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.940101 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddtdd\" (UniqueName: \"kubernetes.io/projected/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-kube-api-access-ddtdd\") pod \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.940838 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-nova-metadata-tls-certs\") pod \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.941085 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-config-data\") pod \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.941428 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-combined-ca-bundle\") pod \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.941474 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-logs\") pod \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.942026 5010 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-logs" (OuterVolumeSpecName: "logs") pod "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" (UID: "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.949481 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-kube-api-access-ddtdd" (OuterVolumeSpecName: "kube-api-access-ddtdd") pod "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" (UID: "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e"). InnerVolumeSpecName "kube-api-access-ddtdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.989878 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-config-data" (OuterVolumeSpecName: "config-data") pod "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" (UID: "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.002356 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" (UID: "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.017096 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" (UID: "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.044883 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.044940 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.044954 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddtdd\" (UniqueName: \"kubernetes.io/projected/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-kube-api-access-ddtdd\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.044970 5010 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.044985 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.093674 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"291a9878-85fe-4988-8a7d-1da10ac49b23","Type":"ContainerStarted","Data":"da94971cc58ba2c42c3ad1836afff46400802415777abe34ceadccb5855776c3"} Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.093743 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"291a9878-85fe-4988-8a7d-1da10ac49b23","Type":"ContainerStarted","Data":"85cecab4f6c9af2519d22c0f5ca34ce44fde0330b3c97d2f01236561ec50ec88"} Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.093771 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.096730 5010 generic.go:334] "Generic (PLEG): container finished" podID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerID="add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2" exitCode=0 Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.096778 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.096824 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e","Type":"ContainerDied","Data":"add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2"} Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.096874 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e","Type":"ContainerDied","Data":"d287adc54325882a622782a3232f723bb21563ecbced55297361e7dc2d758abc"} Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.096901 5010 scope.go:117] "RemoveContainer" containerID="add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.126395 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.126352433 podStartE2EDuration="2.126352433s" podCreationTimestamp="2026-02-03 10:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:40.115849704 +0000 UTC m=+1530.271825853" watchObservedRunningTime="2026-02-03 10:27:40.126352433 +0000 UTC m=+1530.282328582" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.133591 5010 scope.go:117] "RemoveContainer" containerID="62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.162187 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.178791 5010 scope.go:117] "RemoveContainer" containerID="add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.183683 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:40 crc kubenswrapper[5010]: E0203 10:27:40.185516 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2\": container with ID starting with add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2 not found: ID does not exist" containerID="add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.185616 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2"} err="failed to get container status \"add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2\": rpc error: code = NotFound desc = could not find container \"add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2\": container with ID starting with add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2 not found: ID does not exist" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.185667 5010 scope.go:117] "RemoveContainer" containerID="62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8" Feb 03 10:27:40 crc kubenswrapper[5010]: E0203 10:27:40.189828 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8\": 
container with ID starting with 62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8 not found: ID does not exist" containerID="62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.189905 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8"} err="failed to get container status \"62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8\": rpc error: code = NotFound desc = could not find container \"62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8\": container with ID starting with 62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8 not found: ID does not exist" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.201361 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:40 crc kubenswrapper[5010]: E0203 10:27:40.202233 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-metadata" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.202266 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-metadata" Feb 03 10:27:40 crc kubenswrapper[5010]: E0203 10:27:40.202396 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-log" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.202409 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-log" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.202705 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-log" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.202753 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-metadata" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.204441 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.209935 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.210242 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.217982 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.256187 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-config-data\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.256327 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.256364 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bnxb\" (UniqueName: \"kubernetes.io/projected/4c43ac79-0458-4b95-a9fd-26bc038c195b-kube-api-access-9bnxb\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.256392 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c43ac79-0458-4b95-a9fd-26bc038c195b-logs\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.256536 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.358656 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.358816 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-config-data\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.358872 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " 
pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.358899 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c43ac79-0458-4b95-a9fd-26bc038c195b-logs\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.358922 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bnxb\" (UniqueName: \"kubernetes.io/projected/4c43ac79-0458-4b95-a9fd-26bc038c195b-kube-api-access-9bnxb\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.360241 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c43ac79-0458-4b95-a9fd-26bc038c195b-logs\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.366010 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.366798 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.367119 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-config-data\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.380296 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bnxb\" (UniqueName: \"kubernetes.io/projected/4c43ac79-0458-4b95-a9fd-26bc038c195b-kube-api-access-9bnxb\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.517791 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" path="/var/lib/kubelet/pods/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e/volumes" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.518531 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b88c8b02-54df-4761-acc8-c959005f4444" path="/var/lib/kubelet/pods/b88c8b02-54df-4761-acc8-c959005f4444/volumes" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.552601 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:41 crc kubenswrapper[5010]: I0203 10:27:41.116584 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:41 crc kubenswrapper[5010]: W0203 10:27:41.141703 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c43ac79_0458_4b95_a9fd_26bc038c195b.slice/crio-d8c29f4fa62c3f6d24562331b8a0ba99f0c35f78468e992ff282bcdb95f55c82 WatchSource:0}: Error finding container d8c29f4fa62c3f6d24562331b8a0ba99f0c35f78468e992ff282bcdb95f55c82: Status 404 returned error can't find the container with id d8c29f4fa62c3f6d24562331b8a0ba99f0c35f78468e992ff282bcdb95f55c82 Feb 03 10:27:41 crc kubenswrapper[5010]: E0203 10:27:41.239450 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d is running failed: container process not found" containerID="fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 10:27:41 crc kubenswrapper[5010]: E0203 10:27:41.249415 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d is running failed: container process not found" containerID="fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 10:27:41 crc kubenswrapper[5010]: E0203 10:27:41.252378 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d is running failed: container process not found" containerID="fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 10:27:41 crc kubenswrapper[5010]: E0203 10:27:41.252503 5010 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" containerName="nova-scheduler-scheduler" Feb 03 10:27:41 crc kubenswrapper[5010]: I0203 10:27:41.690961 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:27:41 crc kubenswrapper[5010]: I0203 10:27:41.836271 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srb2s\" (UniqueName: \"kubernetes.io/projected/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-kube-api-access-srb2s\") pod \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " Feb 03 10:27:41 crc kubenswrapper[5010]: I0203 10:27:41.836447 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-combined-ca-bundle\") pod \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " Feb 03 10:27:41 crc kubenswrapper[5010]: I0203 10:27:41.836648 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-config-data\") pod \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.096725 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-kube-api-access-srb2s" (OuterVolumeSpecName: "kube-api-access-srb2s") pod "3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" (UID: "3d95db89-dc92-4f4e-9371-a9dfcf2eb54e"). InnerVolumeSpecName "kube-api-access-srb2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.122934 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srb2s\" (UniqueName: \"kubernetes.io/projected/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-kube-api-access-srb2s\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.147319 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-config-data" (OuterVolumeSpecName: "config-data") pod "3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" (UID: "3d95db89-dc92-4f4e-9371-a9dfcf2eb54e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.217972 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c43ac79-0458-4b95-a9fd-26bc038c195b","Type":"ContainerStarted","Data":"70f58e247699be77808ee32bd051173d13561654851dcea2d20478da52e6150e"} Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.218072 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c43ac79-0458-4b95-a9fd-26bc038c195b","Type":"ContainerStarted","Data":"d8c29f4fa62c3f6d24562331b8a0ba99f0c35f78468e992ff282bcdb95f55c82"} Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.225313 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.239518 5010 generic.go:334] "Generic (PLEG): container finished" podID="3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" containerID="fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" exitCode=0 Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.239596 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e","Type":"ContainerDied","Data":"fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d"} Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.239642 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e","Type":"ContainerDied","Data":"bf460f6ef526dd4f94d755e6904b0e4b071bb805f8064c527674ef4f7512a907"} Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.239665 5010 scope.go:117] "RemoveContainer" containerID="fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.239918 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.358901 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" (UID: "3d95db89-dc92-4f4e-9371-a9dfcf2eb54e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.431164 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.454954 5010 scope.go:117] "RemoveContainer" containerID="fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" Feb 03 10:27:42 crc kubenswrapper[5010]: E0203 10:27:42.455606 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d\": container with ID starting with fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d not found: ID does not exist" containerID="fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.455677 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d"} err="failed to get container status \"fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d\": rpc error: code = NotFound desc = could not find container \"fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d\": container with ID starting with fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d not found: ID does not exist" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.569426 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.587135 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.603691 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:42 crc kubenswrapper[5010]: E0203 10:27:42.604350 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" containerName="nova-scheduler-scheduler" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.604418 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" containerName="nova-scheduler-scheduler" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.604679 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" containerName="nova-scheduler-scheduler" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.605372 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.608288 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.614855 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.637087 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6chss\" (UniqueName: \"kubernetes.io/projected/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-kube-api-access-6chss\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.637463 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-config-data\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.637520 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.739671 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-config-data\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.739732 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.739849 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6chss\" (UniqueName: \"kubernetes.io/projected/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-kube-api-access-6chss\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.745195 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.745692 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-config-data\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.776002 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6chss\" (UniqueName: 
\"kubernetes.io/projected/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-kube-api-access-6chss\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.256051 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.302320 5010 generic.go:334] "Generic (PLEG): container finished" podID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerID="c99bed3bf87dd9576980ecaf735b0a2713f9773f5d114b1af04d87bd2cd7c5e6" exitCode=0 Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.302460 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dae76c0d-99bf-42f4-8678-5c1693262ecc","Type":"ContainerDied","Data":"c99bed3bf87dd9576980ecaf735b0a2713f9773f5d114b1af04d87bd2cd7c5e6"} Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.305904 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c43ac79-0458-4b95-a9fd-26bc038c195b","Type":"ContainerStarted","Data":"a78044c6ee003f2a2c2b9afaa9ab8fb12ae812a98e2ee39a42b2fc304776640e"} Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.894797 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.920827 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.920805864 podStartE2EDuration="3.920805864s" podCreationTimestamp="2026-02-03 10:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:43.393820373 +0000 UTC m=+1533.549796512" watchObservedRunningTime="2026-02-03 10:27:43.920805864 +0000 UTC m=+1534.076781993" Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.971746 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-combined-ca-bundle\") pod \"dae76c0d-99bf-42f4-8678-5c1693262ecc\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.972010 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndfcm\" (UniqueName: \"kubernetes.io/projected/dae76c0d-99bf-42f4-8678-5c1693262ecc-kube-api-access-ndfcm\") pod \"dae76c0d-99bf-42f4-8678-5c1693262ecc\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.972069 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae76c0d-99bf-42f4-8678-5c1693262ecc-logs\") pod \"dae76c0d-99bf-42f4-8678-5c1693262ecc\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.972156 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-config-data\") pod \"dae76c0d-99bf-42f4-8678-5c1693262ecc\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.976703 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dae76c0d-99bf-42f4-8678-5c1693262ecc-logs" (OuterVolumeSpecName: "logs") pod "dae76c0d-99bf-42f4-8678-5c1693262ecc" (UID: "dae76c0d-99bf-42f4-8678-5c1693262ecc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.985443 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae76c0d-99bf-42f4-8678-5c1693262ecc-kube-api-access-ndfcm" (OuterVolumeSpecName: "kube-api-access-ndfcm") pod "dae76c0d-99bf-42f4-8678-5c1693262ecc" (UID: "dae76c0d-99bf-42f4-8678-5c1693262ecc"). InnerVolumeSpecName "kube-api-access-ndfcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.995357 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.038816 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-config-data" (OuterVolumeSpecName: "config-data") pod "dae76c0d-99bf-42f4-8678-5c1693262ecc" (UID: "dae76c0d-99bf-42f4-8678-5c1693262ecc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.074603 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.074650 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndfcm\" (UniqueName: \"kubernetes.io/projected/dae76c0d-99bf-42f4-8678-5c1693262ecc-kube-api-access-ndfcm\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.074667 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae76c0d-99bf-42f4-8678-5c1693262ecc-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.085791 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dae76c0d-99bf-42f4-8678-5c1693262ecc" (UID: "dae76c0d-99bf-42f4-8678-5c1693262ecc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.177681 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.337534 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2d836d0-d303-41ca-9c8b-f714d6a4e76c","Type":"ContainerStarted","Data":"58f162aa3d6e537665ac2963288a9914168137aa741e22132f9fea00cc29574c"} Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.346418 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.349452 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dae76c0d-99bf-42f4-8678-5c1693262ecc","Type":"ContainerDied","Data":"6078c7a1e48bd775bca8b987098ebda1a5e82da5d6e8ba44c4019d49bd1f8dd5"} Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.349532 5010 scope.go:117] "RemoveContainer" containerID="c99bed3bf87dd9576980ecaf735b0a2713f9773f5d114b1af04d87bd2cd7c5e6" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.408366 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.420713 5010 scope.go:117] "RemoveContainer" containerID="241c9e9f88442e26f4c60b5bf7f593615d35fb056df34c097b437a3289e1ed1e" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.428044 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.470939 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:44 crc kubenswrapper[5010]: E0203 10:27:44.471530 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-api" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.471550 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-api" Feb 03 10:27:44 crc kubenswrapper[5010]: E0203 10:27:44.471567 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-log" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.471573 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-log" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.471777 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-api" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.471805 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-log" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.540298 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.546849 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.584933 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" path="/var/lib/kubelet/pods/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e/volumes" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.585831 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" path="/var/lib/kubelet/pods/dae76c0d-99bf-42f4-8678-5c1693262ecc/volumes" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.587365 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.700874 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-config-data\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.700981 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341c8347-e47b-42c7-ace7-acb55f2b8c0f-logs\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.701492 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.701875 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfbmc\" (UniqueName: \"kubernetes.io/projected/341c8347-e47b-42c7-ace7-acb55f2b8c0f-kube-api-access-lfbmc\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.806551 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfbmc\" (UniqueName: \"kubernetes.io/projected/341c8347-e47b-42c7-ace7-acb55f2b8c0f-kube-api-access-lfbmc\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.806786 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-config-data\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.806843 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341c8347-e47b-42c7-ace7-acb55f2b8c0f-logs\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.806884 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.808654 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341c8347-e47b-42c7-ace7-acb55f2b8c0f-logs\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.819488 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-config-data\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.835604 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.848778 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfbmc\" (UniqueName: \"kubernetes.io/projected/341c8347-e47b-42c7-ace7-acb55f2b8c0f-kube-api-access-lfbmc\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.891739 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:27:45 crc kubenswrapper[5010]: I0203 10:27:45.553349 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 10:27:45 crc kubenswrapper[5010]: I0203 10:27:45.555185 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 10:27:45 crc kubenswrapper[5010]: I0203 10:27:45.579168 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2d836d0-d303-41ca-9c8b-f714d6a4e76c","Type":"ContainerStarted","Data":"3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3"} Feb 03 10:27:45 crc kubenswrapper[5010]: I0203 10:27:45.627142 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:45 crc kubenswrapper[5010]: W0203 10:27:45.634543 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod341c8347_e47b_42c7_ace7_acb55f2b8c0f.slice/crio-c47f6676aaf9cff804c2a71888dc81341a699bfd049b92c645db6bd9367bad06 WatchSource:0}: Error finding container c47f6676aaf9cff804c2a71888dc81341a699bfd049b92c645db6bd9367bad06: Status 404 returned error can't find the container with id c47f6676aaf9cff804c2a71888dc81341a699bfd049b92c645db6bd9367bad06 Feb 03 10:27:45 crc kubenswrapper[5010]: I0203 10:27:45.640017 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.6399853269999998 podStartE2EDuration="3.639985327s" podCreationTimestamp="2026-02-03 10:27:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:45.604172928 +0000 UTC m=+1535.760149057" 
watchObservedRunningTime="2026-02-03 10:27:45.639985327 +0000 UTC m=+1535.795961446" Feb 03 10:27:46 crc kubenswrapper[5010]: I0203 10:27:46.390132 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:27:46 crc kubenswrapper[5010]: I0203 10:27:46.390609 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:27:46 crc kubenswrapper[5010]: I0203 10:27:46.596807 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"341c8347-e47b-42c7-ace7-acb55f2b8c0f","Type":"ContainerStarted","Data":"af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb"} Feb 03 10:27:46 crc kubenswrapper[5010]: I0203 10:27:46.597156 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"341c8347-e47b-42c7-ace7-acb55f2b8c0f","Type":"ContainerStarted","Data":"28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6"} Feb 03 10:27:46 crc kubenswrapper[5010]: I0203 10:27:46.597193 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"341c8347-e47b-42c7-ace7-acb55f2b8c0f","Type":"ContainerStarted","Data":"c47f6676aaf9cff804c2a71888dc81341a699bfd049b92c645db6bd9367bad06"} Feb 03 10:27:46 crc kubenswrapper[5010]: I0203 10:27:46.641101 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.6410710330000002 podStartE2EDuration="2.641071033s" podCreationTimestamp="2026-02-03 10:27:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:46.626047097 +0000 UTC m=+1536.782023216" watchObservedRunningTime="2026-02-03 10:27:46.641071033 +0000 UTC m=+1536.797047162" Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.123936 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.125123 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="7b0ebfb6-7019-4de6-88df-b2161da95e9b" containerName="kube-state-metrics" containerID="cri-o://8566fd9acbf9b37a7c0e5b8b574fab43fa6c097fb1878bb86a8c41a2e79e2d53" gracePeriod=30 Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.634612 5010 generic.go:334] "Generic (PLEG): container finished" podID="7b0ebfb6-7019-4de6-88df-b2161da95e9b" containerID="8566fd9acbf9b37a7c0e5b8b574fab43fa6c097fb1878bb86a8c41a2e79e2d53" exitCode=2 Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.636419 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7b0ebfb6-7019-4de6-88df-b2161da95e9b","Type":"ContainerDied","Data":"8566fd9acbf9b37a7c0e5b8b574fab43fa6c097fb1878bb86a8c41a2e79e2d53"} Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.777758 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.876235 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxkf4\" (UniqueName: \"kubernetes.io/projected/7b0ebfb6-7019-4de6-88df-b2161da95e9b-kube-api-access-lxkf4\") pod \"7b0ebfb6-7019-4de6-88df-b2161da95e9b\" (UID: \"7b0ebfb6-7019-4de6-88df-b2161da95e9b\") " Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.899571 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b0ebfb6-7019-4de6-88df-b2161da95e9b-kube-api-access-lxkf4" (OuterVolumeSpecName: "kube-api-access-lxkf4") pod "7b0ebfb6-7019-4de6-88df-b2161da95e9b" (UID: "7b0ebfb6-7019-4de6-88df-b2161da95e9b"). InnerVolumeSpecName "kube-api-access-lxkf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.979956 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxkf4\" (UniqueName: \"kubernetes.io/projected/7b0ebfb6-7019-4de6-88df-b2161da95e9b-kube-api-access-lxkf4\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.262391 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.648127 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7b0ebfb6-7019-4de6-88df-b2161da95e9b","Type":"ContainerDied","Data":"99eae2ce273fff1db7b69f1325ef839ad84ecc780d3634ec59776f868fb7d556"} Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.648176 5010 scope.go:117] "RemoveContainer" containerID="8566fd9acbf9b37a7c0e5b8b574fab43fa6c097fb1878bb86a8c41a2e79e2d53" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.648328 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.696937 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.713685 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.723397 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.732775 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:27:48 crc kubenswrapper[5010]: E0203 10:27:48.733467 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0ebfb6-7019-4de6-88df-b2161da95e9b" containerName="kube-state-metrics" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.733489 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0ebfb6-7019-4de6-88df-b2161da95e9b" containerName="kube-state-metrics" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.733732 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b0ebfb6-7019-4de6-88df-b2161da95e9b" containerName="kube-state-metrics" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.734635 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.738630 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.738642 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.778248 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.901621 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.901979 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.902200 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6nx5\" (UniqueName: \"kubernetes.io/projected/de374df0-0b73-4be2-9719-d4b471782ed4-kube-api-access-h6nx5\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.902257 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.165232 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6nx5\" (UniqueName: \"kubernetes.io/projected/de374df0-0b73-4be2-9719-d4b471782ed4-kube-api-access-h6nx5\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.165307 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.165414 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.165672 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.174300 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.175843 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.178466 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.192470 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6nx5\" (UniqueName: \"kubernetes.io/projected/de374df0-0b73-4be2-9719-d4b471782ed4-kube-api-access-h6nx5\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.365053 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: W0203 10:27:49.731984 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde374df0_0b73_4be2_9719_d4b471782ed4.slice/crio-aee893da80b4786c451fe90946be81becfbec886f6a9282b8ea893166a62a105 WatchSource:0}: Error finding container aee893da80b4786c451fe90946be81becfbec886f6a9282b8ea893166a62a105: Status 404 returned error can't find the container with id aee893da80b4786c451fe90946be81becfbec886f6a9282b8ea893166a62a105 Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.749533 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.863584 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.863963 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="ceilometer-central-agent" containerID="cri-o://bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee" gracePeriod=30 Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.864051 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="ceilometer-notification-agent" containerID="cri-o://9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd" gracePeriod=30 Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.864051 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="proxy-httpd" containerID="cri-o://7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e" gracePeriod=30 Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.864054 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="sg-core" containerID="cri-o://f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28" gracePeriod=30 Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.518591 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b0ebfb6-7019-4de6-88df-b2161da95e9b" path="/var/lib/kubelet/pods/7b0ebfb6-7019-4de6-88df-b2161da95e9b/volumes" Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.554020 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.557260 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.729704 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"de374df0-0b73-4be2-9719-d4b471782ed4","Type":"ContainerStarted","Data":"09638d8f14a0e6990096afb1a2128a2b41505deb68f4c5a411beb7b5380a0fba"} Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.729799 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"de374df0-0b73-4be2-9719-d4b471782ed4","Type":"ContainerStarted","Data":"aee893da80b4786c451fe90946be81becfbec886f6a9282b8ea893166a62a105"} Feb 03 10:27:50 crc kubenswrapper[5010]: 
I0203 10:27:50.729870 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.762024 5010 generic.go:334] "Generic (PLEG): container finished" podID="07964b2d-a893-46b5-a01d-c479361c0d37" containerID="7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e" exitCode=0 Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.763284 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerDied","Data":"7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e"} Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.763405 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerDied","Data":"f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28"} Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.763336 5010 generic.go:334] "Generic (PLEG): container finished" podID="07964b2d-a893-46b5-a01d-c479361c0d37" containerID="f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28" exitCode=2 Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.791880 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.237819507 podStartE2EDuration="2.791852813s" podCreationTimestamp="2026-02-03 10:27:48 +0000 UTC" firstStartedPulling="2026-02-03 10:27:49.736842463 +0000 UTC m=+1539.892818582" lastFinishedPulling="2026-02-03 10:27:50.290875759 +0000 UTC m=+1540.446851888" observedRunningTime="2026-02-03 10:27:50.761685328 +0000 UTC m=+1540.917661457" watchObservedRunningTime="2026-02-03 10:27:50.791852813 +0000 UTC m=+1540.947828942" Feb 03 10:27:51 crc kubenswrapper[5010]: I0203 10:27:51.833054 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 10:27:51 crc kubenswrapper[5010]: I0203 10:27:51.840663 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 10:27:51 crc kubenswrapper[5010]: I0203 10:27:51.879936 5010 generic.go:334] "Generic (PLEG): container finished" podID="07964b2d-a893-46b5-a01d-c479361c0d37" containerID="bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee" exitCode=0 Feb 03 10:27:51 crc kubenswrapper[5010]: I0203 10:27:51.880921 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerDied","Data":"bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee"} Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.608134 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.671326 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-config-data\") pod \"07964b2d-a893-46b5-a01d-c479361c0d37\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.671395 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-run-httpd\") pod \"07964b2d-a893-46b5-a01d-c479361c0d37\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.671495 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-log-httpd\") pod \"07964b2d-a893-46b5-a01d-c479361c0d37\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.671562 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-combined-ca-bundle\") pod \"07964b2d-a893-46b5-a01d-c479361c0d37\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.671587 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-sg-core-conf-yaml\") pod \"07964b2d-a893-46b5-a01d-c479361c0d37\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.671762 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-scripts\") pod \"07964b2d-a893-46b5-a01d-c479361c0d37\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.671874 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mzfj\" (UniqueName: \"kubernetes.io/projected/07964b2d-a893-46b5-a01d-c479361c0d37-kube-api-access-2mzfj\") pod \"07964b2d-a893-46b5-a01d-c479361c0d37\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.672963 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "07964b2d-a893-46b5-a01d-c479361c0d37" (UID: "07964b2d-a893-46b5-a01d-c479361c0d37"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.673447 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "07964b2d-a893-46b5-a01d-c479361c0d37" (UID: "07964b2d-a893-46b5-a01d-c479361c0d37"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.681391 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07964b2d-a893-46b5-a01d-c479361c0d37-kube-api-access-2mzfj" (OuterVolumeSpecName: "kube-api-access-2mzfj") pod "07964b2d-a893-46b5-a01d-c479361c0d37" (UID: "07964b2d-a893-46b5-a01d-c479361c0d37"). InnerVolumeSpecName "kube-api-access-2mzfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.687815 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-scripts" (OuterVolumeSpecName: "scripts") pod "07964b2d-a893-46b5-a01d-c479361c0d37" (UID: "07964b2d-a893-46b5-a01d-c479361c0d37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.762371 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "07964b2d-a893-46b5-a01d-c479361c0d37" (UID: "07964b2d-a893-46b5-a01d-c479361c0d37"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.804513 5010 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.804701 5010 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.804802 5010 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.804884 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.804953 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mzfj\" (UniqueName: \"kubernetes.io/projected/07964b2d-a893-46b5-a01d-c479361c0d37-kube-api-access-2mzfj\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.814689 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07964b2d-a893-46b5-a01d-c479361c0d37" (UID: "07964b2d-a893-46b5-a01d-c479361c0d37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.840312 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-config-data" (OuterVolumeSpecName: "config-data") pod "07964b2d-a893-46b5-a01d-c479361c0d37" (UID: "07964b2d-a893-46b5-a01d-c479361c0d37"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.900960 5010 generic.go:334] "Generic (PLEG): container finished" podID="07964b2d-a893-46b5-a01d-c479361c0d37" containerID="9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd" exitCode=0 Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.901376 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerDied","Data":"9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd"} Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.901498 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerDied","Data":"cd6841d336caf71fc510297facb1277599cbdeca80d5b944442ca08505d329ae"} Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.901617 5010 scope.go:117] "RemoveContainer" containerID="7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.901822 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.913818 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.913867 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.955143 5010 scope.go:117] "RemoveContainer" containerID="f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.969993 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.981438 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.008973 5010 scope.go:117] "RemoveContainer" containerID="9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.013043 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.013816 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="proxy-httpd" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.013979 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="proxy-httpd" Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.014091 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="sg-core" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.017575 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="sg-core" Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.017826 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" 
containerName="ceilometer-notification-agent" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.017916 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="ceilometer-notification-agent" Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.018029 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="ceilometer-central-agent" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.018143 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="ceilometer-central-agent" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.018637 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="proxy-httpd" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.018751 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="ceilometer-notification-agent" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.018849 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="sg-core" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.018922 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="ceilometer-central-agent" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.023565 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.028349 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.040395 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.045948 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.052270 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.066804 5010 scope.go:117] "RemoveContainer" containerID="bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.163641 5010 scope.go:117] "RemoveContainer" containerID="7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e" Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.165100 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e\": container with ID starting with 7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e not found: ID does not exist" containerID="7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.165151 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e"} err="failed to get container status \"7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e\": rpc error: code = NotFound desc = could not find container 
\"7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e\": container with ID starting with 7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e not found: ID does not exist" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.165189 5010 scope.go:117] "RemoveContainer" containerID="f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28" Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.165978 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28\": container with ID starting with f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28 not found: ID does not exist" containerID="f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.166002 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28"} err="failed to get container status \"f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28\": rpc error: code = NotFound desc = could not find container \"f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28\": container with ID starting with f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28 not found: ID does not exist" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.166040 5010 scope.go:117] "RemoveContainer" containerID="9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd" Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.166549 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd\": container with ID starting with 9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd not found: ID does not exist" containerID="9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.166576 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd"} err="failed to get container status \"9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd\": rpc error: code = NotFound desc = could not find container \"9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd\": container with ID starting with 9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd not found: ID does not exist" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.166612 5010 scope.go:117] "RemoveContainer" containerID="bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee" Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.167013 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee\": container with ID starting with bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee not found: ID does not exist" containerID="bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.167031 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee"} 
err="failed to get container status \"bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee\": rpc error: code = NotFound desc = could not find container \"bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee\": container with ID starting with bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee not found: ID does not exist" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.225417 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-log-httpd\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.225502 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-run-httpd\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.225544 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-scripts\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.225969 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77mvm\" (UniqueName: \"kubernetes.io/projected/124e7652-b5a0-4a37-af4e-03b4585b6d71-kube-api-access-77mvm\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.226045 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.226588 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.226725 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-config-data\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.226757 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.258807 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 03 10:27:53 crc 
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.329809 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-config-data\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.329868 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.329894 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-log-httpd\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.329921 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-run-httpd\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.329952 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-scripts\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.330012 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77mvm\" (UniqueName: \"kubernetes.io/projected/124e7652-b5a0-4a37-af4e-03b4585b6d71-kube-api-access-77mvm\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.330081 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.330233 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.332898 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-log-httpd\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.333012 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-run-httpd\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.337175 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.337964 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-scripts\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.341963 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.342908 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.347253 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-config-data\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.351326 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77mvm\" (UniqueName: \"kubernetes.io/projected/124e7652-b5a0-4a37-af4e-03b4585b6d71-kube-api-access-77mvm\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.359648 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 10:27:53 crc kubenswrapper[5010]: W0203 10:27:53.723958 5010 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e6ce46b_7ed7_48c5_a09c_cb39ec7bf34b.slice/crio-df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede": error while statting cgroup v2: [unable to parse /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e6ce46b_7ed7_48c5_a09c_cb39ec7bf34b.slice/crio-df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede/memory.stat: read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e6ce46b_7ed7_48c5_a09c_cb39ec7bf34b.slice/crio-df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede/memory.stat: no such device], continuing to push stats
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.947664 5010 generic.go:334] "Generic (PLEG): container finished" podID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerID="ccb768185c1be80c1cf2232c6f15632edb6af133c55f2bd369d8a13606beb3d6" exitCode=137
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.947712 5010 generic.go:334] "Generic (PLEG): container finished" podID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerID="d39b7b37971eb5d63b6cabefb740041e4cc9cc6265fc84bc4b6ff52605291d6a" exitCode=137
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.947808 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerDied","Data":"ccb768185c1be80c1cf2232c6f15632edb6af133c55f2bd369d8a13606beb3d6"}
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.947847 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerDied","Data":"d39b7b37971eb5d63b6cabefb740041e4cc9cc6265fc84bc4b6ff52605291d6a"}
Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.947881 5010 scope.go:117] "RemoveContainer" containerID="4e9bc8f0d6381cd12e012dcf3fe06eb0672b376af0b818c286309997a48dc607"
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.011519 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.062491 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.367546 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cdcd56868-k9h7g"
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.415977 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-config-data\") pod \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") "
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.416182 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnlnb\" (UniqueName: \"kubernetes.io/projected/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-kube-api-access-mnlnb\") pod \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") "
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.416318 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-logs\") pod \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") "
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.417256 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-logs" (OuterVolumeSpecName: "logs") pod "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" (UID: "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.417378 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-combined-ca-bundle\") pod \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") "
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.417797 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-tls-certs\") pod \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") "
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.417921 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-scripts\") pod \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") "
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.418006 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-secret-key\") pod \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") "
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.418558 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-logs\") on node \"crc\" DevicePath \"\""
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.426360 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" (UID: "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
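exitCode=137 on the horizon containers is the 128+N signal convention: 128+9, i.e. the runtime SIGKILLed them; the 143 seen later for nova-api-log is 128+15, a SIGTERM honored in time. A small self-contained Go decoder for the convention:

    package main

    import (
    	"fmt"
    	"syscall"
    )

    // signalFromExitCode decodes the 128+N convention used for processes
    // killed by a signal (137 = SIGKILL, 143 = SIGTERM).
    func signalFromExitCode(code int) (syscall.Signal, bool) {
    	if code > 128 && code < 128+64 {
    		return syscall.Signal(code - 128), true
    	}
    	return 0, false
    }

    func main() {
    	for _, c := range []int{137, 143, 2, 0} { // exit codes seen in this log
    		if sig, ok := signalFromExitCode(c); ok {
    			fmt.Printf("exit %d => killed by %s\n", c, sig)
    		} else {
    			fmt.Printf("exit %d => ordinary exit status\n", c)
    		}
    	}
    }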
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.426472 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-kube-api-access-mnlnb" (OuterVolumeSpecName: "kube-api-access-mnlnb") pod "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" (UID: "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b"). InnerVolumeSpecName "kube-api-access-mnlnb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.454689 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-config-data" (OuterVolumeSpecName: "config-data") pod "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" (UID: "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.465566 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-scripts" (OuterVolumeSpecName: "scripts") pod "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" (UID: "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.483415 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" (UID: "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.516037 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" (UID: "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.517702 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" path="/var/lib/kubelet/pods/07964b2d-a893-46b5-a01d-c479361c0d37/volumes"
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.523117 5010 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.523159 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.523172 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnlnb\" (UniqueName: \"kubernetes.io/projected/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-kube-api-access-mnlnb\") on node \"crc\" DevicePath \"\""
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.523189 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.523204 5010 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.523231 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:54.988378 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.018425 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.017450 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cdcd56868-k9h7g"
Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.018473 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerDied","Data":"df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede"}
Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.018552 5010 scope.go:117] "RemoveContainer" containerID="ccb768185c1be80c1cf2232c6f15632edb6af133c55f2bd369d8a13606beb3d6"
Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.031573 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerStarted","Data":"8b65fa50da6f4624928ff97940b1b888dbd6125f5954bb57d55b8b921aea3ffc"}
Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.059140 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cdcd56868-k9h7g"]
Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.073707 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7cdcd56868-k9h7g"]
Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.266993 5010 scope.go:117] "RemoveContainer" containerID="d39b7b37971eb5d63b6cabefb740041e4cc9cc6265fc84bc4b6ff52605291d6a"
Feb 03 10:27:56 crc kubenswrapper[5010]: I0203 10:27:56.028534 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 10:27:56 crc kubenswrapper[5010]: I0203 10:27:56.049854 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerStarted","Data":"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05"}
Feb 03 10:27:56 crc kubenswrapper[5010]: I0203 10:27:56.069607 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 10:27:56 crc kubenswrapper[5010]: I0203 10:27:56.512821 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" path="/var/lib/kubelet/pods/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b/volumes"
Feb 03 10:27:57 crc kubenswrapper[5010]: I0203 10:27:57.064163 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerStarted","Data":"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec"}
Feb 03 10:27:57 crc kubenswrapper[5010]: I0203 10:27:57.064670 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerStarted","Data":"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81"}
Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.080998 5010 generic.go:334] "Generic (PLEG): container finished" podID="4df0ad18-8721-40ef-91bc-c609d61f1c1b" containerID="ae9cd98547d8fff1706d863c1e8f43d79f4ce19a78307424e4a816129ff20e12" exitCode=137
Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.081049 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4df0ad18-8721-40ef-91bc-c609d61f1c1b","Type":"ContainerDied","Data":"ae9cd98547d8fff1706d863c1e8f43d79f4ce19a78307424e4a816129ff20e12"}
Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.239318 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.401480 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx5mj\" (UniqueName: \"kubernetes.io/projected/4df0ad18-8721-40ef-91bc-c609d61f1c1b-kube-api-access-wx5mj\") pod \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") "
Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.401664 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-combined-ca-bundle\") pod \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") "
Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.402111 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-config-data\") pod \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") "
Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.409049 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df0ad18-8721-40ef-91bc-c609d61f1c1b-kube-api-access-wx5mj" (OuterVolumeSpecName: "kube-api-access-wx5mj") pod "4df0ad18-8721-40ef-91bc-c609d61f1c1b" (UID: "4df0ad18-8721-40ef-91bc-c609d61f1c1b"). InnerVolumeSpecName "kube-api-access-wx5mj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.437517 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4df0ad18-8721-40ef-91bc-c609d61f1c1b" (UID: "4df0ad18-8721-40ef-91bc-c609d61f1c1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.450766 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-config-data" (OuterVolumeSpecName: "config-data") pod "4df0ad18-8721-40ef-91bc-c609d61f1c1b" (UID: "4df0ad18-8721-40ef-91bc-c609d61f1c1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
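The prober.go:107 failures above show the probe's HTTP client hitting its per-request deadline against http://10.217.0.194:8774/. A sketch reproducing the same error text with plain net/http; the one-second value matches the kubelet's default probe timeoutSeconds, though the pod's actual setting isn't visible in the log:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// A slow endpoint surfaces as "context deadline exceeded
    	// (Client.Timeout exceeded while awaiting headers)", exactly the
    	// probeResult output logged above. (An unreachable address would
    	// fail fast with a different error instead.)
    	client := &http.Client{Timeout: 1 * time.Second}
    	_, err := client.Get("http://10.217.0.194:8774/")
    	if err != nil {
    		fmt.Println("probe failed:", err)
    	}
    }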
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.505687 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.505808 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.505821 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx5mj\" (UniqueName: \"kubernetes.io/projected/4df0ad18-8721-40ef-91bc-c609d61f1c1b-kube-api-access-wx5mj\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.098161 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4df0ad18-8721-40ef-91bc-c609d61f1c1b","Type":"ContainerDied","Data":"53f9f5ad7c65c9cd148ac8aad3fd34e98580d6dfe75ba51eece28e29be12ce47"} Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.098291 5010 scope.go:117] "RemoveContainer" containerID="ae9cd98547d8fff1706d863c1e8f43d79f4ce19a78307424e4a816129ff20e12" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.098442 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.142933 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.158399 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.179867 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:59 crc kubenswrapper[5010]: E0203 10:27:59.180354 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon-log" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180367 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon-log" Feb 03 10:27:59 crc kubenswrapper[5010]: E0203 10:27:59.180387 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df0ad18-8721-40ef-91bc-c609d61f1c1b" containerName="nova-cell1-novncproxy-novncproxy" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180395 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df0ad18-8721-40ef-91bc-c609d61f1c1b" containerName="nova-cell1-novncproxy-novncproxy" Feb 03 10:27:59 crc kubenswrapper[5010]: E0203 10:27:59.180419 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180426 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: E0203 10:27:59.180448 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180454 5010 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: E0203 10:27:59.180466 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180472 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180638 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180652 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180671 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon-log" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180681 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4df0ad18-8721-40ef-91bc-c609d61f1c1b" containerName="nova-cell1-novncproxy-novncproxy" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.181361 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.185183 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.186415 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.192007 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.192696 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.232083 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.232148 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.232358 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.232429 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdzxx\" 
(UniqueName: \"kubernetes.io/projected/c9bd4788-ae5f-49c4-8116-04076a16f4f1-kube-api-access-rdzxx\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.232498 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.387979 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.388041 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.388112 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.388131 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdzxx\" (UniqueName: \"kubernetes.io/projected/c9bd4788-ae5f-49c4-8116-04076a16f4f1-kube-api-access-rdzxx\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.388161 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.392935 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.395106 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.395606 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.404435 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.411126 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.416703 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdzxx\" (UniqueName: \"kubernetes.io/projected/c9bd4788-ae5f-49c4-8116-04076a16f4f1-kube-api-access-rdzxx\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.527959 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.059118 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.110271 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerStarted","Data":"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398"} Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.110931 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.112993 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c9bd4788-ae5f-49c4-8116-04076a16f4f1","Type":"ContainerStarted","Data":"e7cd8fc8c77f5abe94ae0b642f56d423fa0b49fe1c31e908ec0f6a21151fee4a"} Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.195307 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.315045944 podStartE2EDuration="8.195280036s" podCreationTimestamp="2026-02-03 10:27:52 +0000 UTC" firstStartedPulling="2026-02-03 10:27:54.219510485 +0000 UTC m=+1544.375486614" lastFinishedPulling="2026-02-03 10:27:59.099744577 +0000 UTC m=+1549.255720706" observedRunningTime="2026-02-03 10:28:00.154771736 +0000 UTC m=+1550.310747875" watchObservedRunningTime="2026-02-03 10:28:00.195280036 +0000 UTC m=+1550.351256175" Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.519388 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4df0ad18-8721-40ef-91bc-c609d61f1c1b" path="/var/lib/kubelet/pods/4df0ad18-8721-40ef-91bc-c609d61f1c1b/volumes" Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.559526 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.563586 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.571592 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 
Feb 03 10:28:01 crc kubenswrapper[5010]: I0203 10:28:01.140153 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 03 10:28:01 crc kubenswrapper[5010]: I0203 10:28:01.154869 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.154838355 podStartE2EDuration="2.154838355s" podCreationTimestamp="2026-02-03 10:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:28:01.148661057 +0000 UTC m=+1551.304637186" watchObservedRunningTime="2026-02-03 10:28:01.154838355 +0000 UTC m=+1551.310814484"
Feb 03 10:28:04 crc kubenswrapper[5010]: I0203 10:28:04.529141 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 03 10:28:04 crc kubenswrapper[5010]: I0203 10:28:04.897183 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 03 10:28:04 crc kubenswrapper[5010]: I0203 10:28:04.897906 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 03 10:28:04 crc kubenswrapper[5010]: I0203 10:28:04.898657 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 03 10:28:04 crc kubenswrapper[5010]: I0203 10:28:04.904039 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.197899 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.206042 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.460871 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5t6hf"]
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.479503 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.480661 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5t6hf"]
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.480757 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.642425 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm9pt\" (UniqueName: \"kubernetes.io/projected/112eb3e9-cf11-4513-be2d-53a42670413e-kube-api-access-pm9pt\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.643150 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.643261 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-config\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.643401 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.643550 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.643679 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.746142 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.746258 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.746325 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.746430 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm9pt\" (UniqueName: \"kubernetes.io/projected/112eb3e9-cf11-4513-be2d-53a42670413e-kube-api-access-pm9pt\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.746487 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.746528 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-config\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.747430 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.747629 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.747687 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.748258 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.748324 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-config\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.770003 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm9pt\" (UniqueName: \"kubernetes.io/projected/112eb3e9-cf11-4513-be2d-53a42670413e-kube-api-access-pm9pt\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.810843 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:06 crc kubenswrapper[5010]: W0203 10:28:06.713360 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod112eb3e9_cf11_4513_be2d_53a42670413e.slice/crio-9696bbc5c05e1ee911f02b7758d1162dc7d17512676a3ce246b9266d4a35accd WatchSource:0}: Error finding container 9696bbc5c05e1ee911f02b7758d1162dc7d17512676a3ce246b9266d4a35accd: Status 404 returned error can't find the container with id 9696bbc5c05e1ee911f02b7758d1162dc7d17512676a3ce246b9266d4a35accd
Feb 03 10:28:06 crc kubenswrapper[5010]: I0203 10:28:06.718420 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5t6hf"]
Feb 03 10:28:07 crc kubenswrapper[5010]: I0203 10:28:07.342151 5010 generic.go:334] "Generic (PLEG): container finished" podID="112eb3e9-cf11-4513-be2d-53a42670413e" containerID="84b72c9b54d05dcdbccb71e2a8f9d59046f32de5c34fe094370a4de1492b0639" exitCode=0
Feb 03 10:28:07 crc kubenswrapper[5010]: I0203 10:28:07.342273 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" event={"ID":"112eb3e9-cf11-4513-be2d-53a42670413e","Type":"ContainerDied","Data":"84b72c9b54d05dcdbccb71e2a8f9d59046f32de5c34fe094370a4de1492b0639"}
Feb 03 10:28:07 crc kubenswrapper[5010]: I0203 10:28:07.342942 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" event={"ID":"112eb3e9-cf11-4513-be2d-53a42670413e","Type":"ContainerStarted","Data":"9696bbc5c05e1ee911f02b7758d1162dc7d17512676a3ce246b9266d4a35accd"}
Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.358503 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" event={"ID":"112eb3e9-cf11-4513-be2d-53a42670413e","Type":"ContainerStarted","Data":"e50968d30732ac2c762348838c8f14a711f5720b5d244d0a09fd6ce7ae975514"}
Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.359361 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.401799 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" podStartSLOduration=3.401764178 podStartE2EDuration="3.401764178s" podCreationTimestamp="2026-02-03 10:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:28:08.382042404 +0000 UTC m=+1558.538018553" watchObservedRunningTime="2026-02-03 10:28:08.401764178 +0000 UTC m=+1558.557740307"
Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.638283 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.638694 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-log" containerID="cri-o://28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6" gracePeriod=30
Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.638788 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-api" containerID="cri-o://af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb" gracePeriod=30
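"Killing container with a grace period" with gracePeriod=30 means SIGTERM first, escalating to SIGKILL only if the process outlives the grace window; that is why nova-api-log exits 143 just below, while the earlier horizon containers ended at 137. A sketch of the pattern in plain Go (illustrative only; the kubelet itself delegates this sequence to CRI-O):

    package main

    import (
    	"os/exec"
    	"syscall"
    	"time"
    )

    // stopWithGrace mimics the runtime's termination sequence: SIGTERM
    // first, SIGKILL if the process is still alive after gracePeriod.
    func stopWithGrace(cmd *exec.Cmd, gracePeriod time.Duration) error {
    	_ = cmd.Process.Signal(syscall.SIGTERM) // exit 143 if handled promptly
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()
    	select {
    	case err := <-done:
    		return err
    	case <-time.After(gracePeriod):
    		_ = cmd.Process.Kill() // escalate: exit 137
    		return <-done
    	}
    }

    func main() {
    	cmd := exec.Command("sleep", "300")
    	_ = cmd.Start()
    	_ = stopWithGrace(cmd, 30*time.Second)
    }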
containerID="cri-o://af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb" gracePeriod=30 Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.824561 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.824905 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-central-agent" containerID="cri-o://e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05" gracePeriod=30 Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.824993 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="proxy-httpd" containerID="cri-o://f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398" gracePeriod=30 Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.825086 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="sg-core" containerID="cri-o://640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec" gracePeriod=30 Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.825131 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-notification-agent" containerID="cri-o://cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81" gracePeriod=30 Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.508572 5010 generic.go:334] "Generic (PLEG): container finished" podID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerID="f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398" exitCode=0 Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.508897 5010 generic.go:334] "Generic (PLEG): container finished" podID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerID="640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec" exitCode=2 Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.508956 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerDied","Data":"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398"} Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.508988 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerDied","Data":"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec"} Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.530563 5010 generic.go:334] "Generic (PLEG): container finished" podID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerID="28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6" exitCode=143 Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.531585 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"341c8347-e47b-42c7-ace7-acb55f2b8c0f","Type":"ContainerDied","Data":"28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6"} Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.532140 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.605540 5010 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.533610 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.542668 5010 generic.go:334] "Generic (PLEG): container finished" podID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerID="cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81" exitCode=0 Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.542701 5010 generic.go:334] "Generic (PLEG): container finished" podID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerID="e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05" exitCode=0 Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.542761 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.542734 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerDied","Data":"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81"} Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.542852 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerDied","Data":"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05"} Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.542864 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerDied","Data":"8b65fa50da6f4624928ff97940b1b888dbd6125f5954bb57d55b8b921aea3ffc"} Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.542881 5010 scope.go:117] "RemoveContainer" containerID="f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.584330 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.584357 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-scripts\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.584605 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-run-httpd\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.584674 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-log-httpd\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.584783 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-sg-core-conf-yaml\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: 
\"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.584889 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-config-data\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.584933 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77mvm\" (UniqueName: \"kubernetes.io/projected/124e7652-b5a0-4a37-af4e-03b4585b6d71-kube-api-access-77mvm\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.585064 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-ceilometer-tls-certs\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.585155 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-combined-ca-bundle\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.592911 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.603423 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.610498 5010 scope.go:117] "RemoveContainer" containerID="640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.616459 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/124e7652-b5a0-4a37-af4e-03b4585b6d71-kube-api-access-77mvm" (OuterVolumeSpecName: "kube-api-access-77mvm") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "kube-api-access-77mvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.620245 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-scripts" (OuterVolumeSpecName: "scripts") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.694599 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.702536 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.702661 5010 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.702682 5010 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.702699 5010 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.702723 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77mvm\" (UniqueName: \"kubernetes.io/projected/124e7652-b5a0-4a37-af4e-03b4585b6d71-kube-api-access-77mvm\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.718443 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.725559 5010 scope.go:117] "RemoveContainer" containerID="cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.751841 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.779184 5010 scope.go:117] "RemoveContainer" containerID="e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.793790 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-fmn8g"] Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.794838 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-central-agent" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.794868 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-central-agent" Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.794897 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="proxy-httpd" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.794908 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="proxy-httpd" Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.794953 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="sg-core" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.794961 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="sg-core" Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.794981 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-notification-agent" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.794990 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-notification-agent" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.795305 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-notification-agent" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.795340 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="sg-core" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.795368 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-central-agent" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.795378 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="proxy-httpd" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.800001 5010 util.go:30] "No sandbox for pod can be found. 
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.800001 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fmn8g"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.804377 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-fmn8g"]
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.805487 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.805517 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.806580 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-config-data" (OuterVolumeSpecName: "config-data") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.808430 5010 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.808463 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.808480 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.812381 5010 scope.go:117] "RemoveContainer" containerID="f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398"
Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.813024 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398\": container with ID starting with f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398 not found: ID does not exist" containerID="f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.813079 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398"} err="failed to get container status \"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398\": rpc error: code = NotFound desc = could not find container \"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398\": container with ID starting with f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398 not found: ID does not exist"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.813109 5010 scope.go:117] "RemoveContainer" containerID="640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec"
Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.813642 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec\": container with ID starting with 640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec not found: ID does not exist" containerID="640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.813684 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec"} err="failed to get container status \"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec\": rpc error: code = NotFound desc = could not find container \"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec\": container with ID starting with 640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec not found: ID does not exist"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.813727 5010 scope.go:117] "RemoveContainer" containerID="cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81"
Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.814085 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81\": container with ID starting with cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81 not found: ID does not exist" containerID="cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.814110 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81"} err="failed to get container status \"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81\": rpc error: code = NotFound desc = could not find container \"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81\": container with ID starting with cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81 not found: ID does not exist"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.814127 5010 scope.go:117] "RemoveContainer" containerID="e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05"
Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.814501 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05\": container with ID starting with e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05 not found: ID does not exist" containerID="e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.814521 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05"} err="failed to get container status \"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05\": rpc error: code = NotFound desc = could not find container \"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05\": container with ID starting with e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05 not found: ID does not exist"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.814537 5010 scope.go:117] "RemoveContainer" containerID="f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.815133 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398"} err="failed to get container status \"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398\": rpc error: code = NotFound desc = could not find container \"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398\": container with ID starting with f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398 not found: ID does not exist"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.815152 5010 scope.go:117] "RemoveContainer" containerID="640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.815679 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec"} err="failed to get container status \"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec\": rpc error: code = NotFound desc = could not find container \"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec\": container with ID starting with 640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec not found: ID does not exist"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.815705 5010 scope.go:117] "RemoveContainer" containerID="cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.816105 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81"} err="failed to get container status \"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81\": rpc error: code = NotFound desc = could not find container \"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81\": container with ID starting with cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81 not found: ID does not exist"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.816145 5010 scope.go:117] "RemoveContainer" containerID="e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.817011 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05"} err="failed to get container status \"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05\": rpc error: code = NotFound desc = could not find container \"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05\": container with ID starting with e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05 not found: ID does not exist"
Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.911233 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g"
\"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.911865 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-scripts\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.911901 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-config-data\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.930151 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.949965 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.966439 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.969583 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.972172 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.973250 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.974304 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.979396 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.018322 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhn74\" (UniqueName: \"kubernetes.io/projected/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-kube-api-access-hhn74\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.018392 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-config-data\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.018439 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-scripts\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.018478 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.018635 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-config-data\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.018702 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.019061 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.019167 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-log-httpd\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.019283 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5zz8\" (UniqueName: \"kubernetes.io/projected/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-kube-api-access-z5zz8\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.019324 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.019384 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-run-httpd\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.019443 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-scripts\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.024425 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " 
pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.024503 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-config-data\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.027991 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-scripts\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.035447 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhn74\" (UniqueName: \"kubernetes.io/projected/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-kube-api-access-hhn74\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.123812 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-config-data\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.123895 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.123927 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.124047 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.124110 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-log-httpd\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.124166 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5zz8\" (UniqueName: \"kubernetes.io/projected/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-kube-api-access-z5zz8\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.124198 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-run-httpd\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.124332 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-scripts\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.126040 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-run-httpd\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.128534 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-log-httpd\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.131134 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.131321 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.131573 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.131981 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-config-data\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.132817 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.133692 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-scripts\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.146503 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5zz8\" (UniqueName: \"kubernetes.io/projected/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-kube-api-access-z5zz8\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.309075 5010 
Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.309075 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.637789 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-fmn8g"]
Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.853748 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.971277 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.363075 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.473960 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-config-data\") pod \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") "
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.474016 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341c8347-e47b-42c7-ace7-acb55f2b8c0f-logs\") pod \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") "
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.474135 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-combined-ca-bundle\") pod \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") "
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.474227 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfbmc\" (UniqueName: \"kubernetes.io/projected/341c8347-e47b-42c7-ace7-acb55f2b8c0f-kube-api-access-lfbmc\") pod \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") "
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.474906 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/341c8347-e47b-42c7-ace7-acb55f2b8c0f-logs" (OuterVolumeSpecName: "logs") pod "341c8347-e47b-42c7-ace7-acb55f2b8c0f" (UID: "341c8347-e47b-42c7-ace7-acb55f2b8c0f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.485236 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/341c8347-e47b-42c7-ace7-acb55f2b8c0f-kube-api-access-lfbmc" (OuterVolumeSpecName: "kube-api-access-lfbmc") pod "341c8347-e47b-42c7-ace7-acb55f2b8c0f" (UID: "341c8347-e47b-42c7-ace7-acb55f2b8c0f"). InnerVolumeSpecName "kube-api-access-lfbmc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.528853 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "341c8347-e47b-42c7-ace7-acb55f2b8c0f" (UID: "341c8347-e47b-42c7-ace7-acb55f2b8c0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.537964 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" path="/var/lib/kubelet/pods/124e7652-b5a0-4a37-af4e-03b4585b6d71/volumes"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.546798 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-config-data" (OuterVolumeSpecName: "config-data") pod "341c8347-e47b-42c7-ace7-acb55f2b8c0f" (UID: "341c8347-e47b-42c7-ace7-acb55f2b8c0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.578066 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.578101 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfbmc\" (UniqueName: \"kubernetes.io/projected/341c8347-e47b-42c7-ace7-acb55f2b8c0f-kube-api-access-lfbmc\") on node \"crc\" DevicePath \"\""
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.578111 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.578121 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341c8347-e47b-42c7-ace7-acb55f2b8c0f-logs\") on node \"crc\" DevicePath \"\""
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.579131 5010 generic.go:334] "Generic (PLEG): container finished" podID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerID="af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb" exitCode=0
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.579639 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"341c8347-e47b-42c7-ace7-acb55f2b8c0f","Type":"ContainerDied","Data":"af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb"}
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.579685 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"341c8347-e47b-42c7-ace7-acb55f2b8c0f","Type":"ContainerDied","Data":"c47f6676aaf9cff804c2a71888dc81341a699bfd049b92c645db6bd9367bad06"}
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.579709 5010 scope.go:117] "RemoveContainer" containerID="af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.579907 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.589626 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fmn8g" event={"ID":"900a4dd0-c8e2-4416-9a0e-8fff95a5053b","Type":"ContainerStarted","Data":"79dc7129a99144c2e59b3fda9930b79947c9ac7a248d6f8abe7b85572f2f5ea2"}
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.589694 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fmn8g" event={"ID":"900a4dd0-c8e2-4416-9a0e-8fff95a5053b","Type":"ContainerStarted","Data":"5e355931a7d8aee1e5fce1e85e08f90a6fc5e4e40c3b64d40ecde61b241ba2a4"}
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.600092 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerStarted","Data":"4d55ccaf8e8fbc23ae8d8fcb578bf3c1e898e367f6ccb3f3993272add85d622a"}
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.763568 5010 scope.go:117] "RemoveContainer" containerID="28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.792685 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-fmn8g" podStartSLOduration=2.7926529970000002 podStartE2EDuration="2.792652997s" podCreationTimestamp="2026-02-03 10:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:28:12.625583894 +0000 UTC m=+1562.781560033" watchObservedRunningTime="2026-02-03 10:28:12.792652997 +0000 UTC m=+1562.948629126"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.838576 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.865372 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.865458 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 03 10:28:12 crc kubenswrapper[5010]: E0203 10:28:12.866037 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-api"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.866054 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-api"
Feb 03 10:28:12 crc kubenswrapper[5010]: E0203 10:28:12.866097 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-log"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.866103 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-log"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.866318 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-api"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.866335 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-log"
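The pod_startup_latency_tracker entry above is plain arithmetic over the timestamps it prints: no image pull happened (both pull times are the zero time), so podStartE2EDuration is just the observed running time minus podCreationTimestamp. Checking the numbers from the log with the watchObservedRunningTime value:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// time.Parse accepts a fractional second even when the layout omits it.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2026-02-03 10:28:10 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-02-03 10:28:12.792652997 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 2.792652997s, matching podStartE2EDuration in the log.
	fmt.Println(running.Sub(created))
}
```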
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.867545 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.872509 5010 scope.go:117] "RemoveContainer" containerID="af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.873642 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.873705 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.874083 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.876004 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 03 10:28:12 crc kubenswrapper[5010]: E0203 10:28:12.879297 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb\": container with ID starting with af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb not found: ID does not exist" containerID="af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.879358 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb"} err="failed to get container status \"af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb\": rpc error: code = NotFound desc = could not find container \"af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb\": container with ID starting with af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb not found: ID does not exist"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.879389 5010 scope.go:117] "RemoveContainer" containerID="28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6"
Feb 03 10:28:12 crc kubenswrapper[5010]: E0203 10:28:12.885107 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6\": container with ID starting with 28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6 not found: ID does not exist" containerID="28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.885182 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6"} err="failed to get container status \"28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6\": rpc error: code = NotFound desc = could not find container \"28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6\": container with ID starting with 28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6 not found: ID does not exist"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.885755 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.885907 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-config-data\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.885943 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-logs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.886084 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-public-tls-certs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.886398 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slhxg\" (UniqueName: \"kubernetes.io/projected/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-kube-api-access-slhxg\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.886790 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.989120 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.989629 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-config-data\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.989726 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-logs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.989848 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-public-tls-certs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.990018 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slhxg\" (UniqueName: \"kubernetes.io/projected/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-kube-api-access-slhxg\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.990237 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.990719 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-logs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.998435 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.998677 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-public-tls-certs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:13 crc kubenswrapper[5010]: I0203 10:28:13.000159 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-config-data\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:13 crc kubenswrapper[5010]: I0203 10:28:13.000781 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:13 crc kubenswrapper[5010]: I0203 10:28:13.018509 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slhxg\" (UniqueName: \"kubernetes.io/projected/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-kube-api-access-slhxg\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:13 crc kubenswrapper[5010]: I0203 10:28:13.273009 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 10:28:13 crc kubenswrapper[5010]: I0203 10:28:13.618681 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerStarted","Data":"21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e"}
Feb 03 10:28:13 crc kubenswrapper[5010]: I0203 10:28:13.876422 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 03 10:28:14 crc kubenswrapper[5010]: I0203 10:28:14.600207 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" path="/var/lib/kubelet/pods/341c8347-e47b-42c7-ace7-acb55f2b8c0f/volumes"
Feb 03 10:28:14 crc kubenswrapper[5010]: I0203 10:28:14.661452 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b","Type":"ContainerStarted","Data":"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c"}
Feb 03 10:28:14 crc kubenswrapper[5010]: I0203 10:28:14.661510 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b","Type":"ContainerStarted","Data":"4ba4db9ad461a1c8c1413d0c4001a20f6f253c1f4411549548ef5cb960e4f2f8"}
Feb 03 10:28:15 crc kubenswrapper[5010]: I0203 10:28:15.675795 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerStarted","Data":"353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363"}
Feb 03 10:28:15 crc kubenswrapper[5010]: I0203 10:28:15.676115 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerStarted","Data":"4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276"}
Feb 03 10:28:15 crc kubenswrapper[5010]: I0203 10:28:15.678621 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b","Type":"ContainerStarted","Data":"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666"}
Feb 03 10:28:15 crc kubenswrapper[5010]: I0203 10:28:15.707384 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.707362629 podStartE2EDuration="3.707362629s" podCreationTimestamp="2026-02-03 10:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:28:15.698154395 +0000 UTC m=+1565.854130524" watchObservedRunningTime="2026-02-03 10:28:15.707362629 +0000 UTC m=+1565.863338758"
Feb 03 10:28:15 crc kubenswrapper[5010]: I0203 10:28:15.813520 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:15 crc kubenswrapper[5010]: I0203 10:28:15.906058 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-x25nd"]
Feb 03 10:28:15 crc kubenswrapper[5010]: I0203 10:28:15.906914 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" podUID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerName="dnsmasq-dns" containerID="cri-o://023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74" gracePeriod=10
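The kills above ("gracePeriod=30" for the ceilometer containers earlier, "gracePeriod=10" for dnsmasq-dns here) follow the same contract: the runtime delivers SIGTERM, waits up to the grace period, then escalates to SIGKILL. A well-behaved container drains and exits on its own, as dnsmasq-dns does in the entries below, finishing with exitCode=0. A sketch in plain Go, nothing kubelet-specific:

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	term := make(chan os.Signal, 1)
	signal.Notify(term, syscall.SIGTERM) // what the runtime sends first

	fmt.Println("serving...")
	<-term
	fmt.Println("SIGTERM received; draining in-flight work")
	time.Sleep(2 * time.Second) // must finish well inside the grace period

	os.Exit(0) // clean shutdown -> exitCode=0; ignoring SIGTERM ends in SIGKILL (exitCode=137)
}
```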
5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.397749 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.397842 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.398617 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.398661 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" gracePeriod=600 Feb 03 10:28:16 crc kubenswrapper[5010]: E0203 10:28:16.558642 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.586625 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.712179 5010 generic.go:334] "Generic (PLEG): container finished" podID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerID="023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74" exitCode=0 Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.712373 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" event={"ID":"55ad6744-8ba2-49c4-bf2c-986f85f40079","Type":"ContainerDied","Data":"023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74"} Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.712393 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.712413 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" event={"ID":"55ad6744-8ba2-49c4-bf2c-986f85f40079","Type":"ContainerDied","Data":"7edb2d5b18afc723b6414cab56e64b2430add9e831d1db279a0d0981b7c44bb5"} Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.712432 5010 scope.go:117] "RemoveContainer" containerID="023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.726814 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" exitCode=0 Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.727222 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1"} Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.728086 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:28:16 crc kubenswrapper[5010]: E0203 10:28:16.728338 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.764233 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-sb\") pod \"55ad6744-8ba2-49c4-bf2c-986f85f40079\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.764450 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-config\") pod \"55ad6744-8ba2-49c4-bf2c-986f85f40079\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.764488 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-swift-storage-0\") pod \"55ad6744-8ba2-49c4-bf2c-986f85f40079\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.764538 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-nb\") pod \"55ad6744-8ba2-49c4-bf2c-986f85f40079\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.764597 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-svc\") pod \"55ad6744-8ba2-49c4-bf2c-986f85f40079\" (UID: 
\"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.764715 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdv6g\" (UniqueName: \"kubernetes.io/projected/55ad6744-8ba2-49c4-bf2c-986f85f40079-kube-api-access-vdv6g\") pod \"55ad6744-8ba2-49c4-bf2c-986f85f40079\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.812663 5010 scope.go:117] "RemoveContainer" containerID="1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.826533 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55ad6744-8ba2-49c4-bf2c-986f85f40079-kube-api-access-vdv6g" (OuterVolumeSpecName: "kube-api-access-vdv6g") pod "55ad6744-8ba2-49c4-bf2c-986f85f40079" (UID: "55ad6744-8ba2-49c4-bf2c-986f85f40079"). InnerVolumeSpecName "kube-api-access-vdv6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.868035 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdv6g\" (UniqueName: \"kubernetes.io/projected/55ad6744-8ba2-49c4-bf2c-986f85f40079-kube-api-access-vdv6g\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.891874 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "55ad6744-8ba2-49c4-bf2c-986f85f40079" (UID: "55ad6744-8ba2-49c4-bf2c-986f85f40079"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.907537 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-config" (OuterVolumeSpecName: "config") pod "55ad6744-8ba2-49c4-bf2c-986f85f40079" (UID: "55ad6744-8ba2-49c4-bf2c-986f85f40079"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.937680 5010 scope.go:117] "RemoveContainer" containerID="023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.938021 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55ad6744-8ba2-49c4-bf2c-986f85f40079" (UID: "55ad6744-8ba2-49c4-bf2c-986f85f40079"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.938076 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "55ad6744-8ba2-49c4-bf2c-986f85f40079" (UID: "55ad6744-8ba2-49c4-bf2c-986f85f40079"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:28:16 crc kubenswrapper[5010]: E0203 10:28:16.941806 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74\": container with ID starting with 023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74 not found: ID does not exist" containerID="023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.941856 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74"} err="failed to get container status \"023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74\": rpc error: code = NotFound desc = could not find container \"023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74\": container with ID starting with 023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74 not found: ID does not exist" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.941892 5010 scope.go:117] "RemoveContainer" containerID="1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33" Feb 03 10:28:16 crc kubenswrapper[5010]: E0203 10:28:16.942508 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33\": container with ID starting with 1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33 not found: ID does not exist" containerID="1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.942556 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33"} err="failed to get container status \"1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33\": rpc error: code = NotFound desc = could not find container \"1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33\": container with ID starting with 1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33 not found: ID does not exist" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.942574 5010 scope.go:117] "RemoveContainer" containerID="feb6be59c5f60eb4fb5b49379a30e3d1c2e1212fd73c563908d470b35420da88" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.958942 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "55ad6744-8ba2-49c4-bf2c-986f85f40079" (UID: "55ad6744-8ba2-49c4-bf2c-986f85f40079"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.969504 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.969540 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.969553 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.969562 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.969572 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:17 crc kubenswrapper[5010]: I0203 10:28:17.169163 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-x25nd"] Feb 03 10:28:17 crc kubenswrapper[5010]: I0203 10:28:17.187728 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-x25nd"] Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.515876 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55ad6744-8ba2-49c4-bf2c-986f85f40079" path="/var/lib/kubelet/pods/55ad6744-8ba2-49c4-bf2c-986f85f40079/volumes" Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.754068 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerStarted","Data":"63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48"} Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.754182 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-central-agent" containerID="cri-o://21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e" gracePeriod=30 Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.754247 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.754266 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="proxy-httpd" containerID="cri-o://63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48" gracePeriod=30 Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.754296 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="sg-core" containerID="cri-o://353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363" gracePeriod=30 Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.754276 5010 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-notification-agent" containerID="cri-o://4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276" gracePeriod=30 Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.800135 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.142478042 podStartE2EDuration="8.800116555s" podCreationTimestamp="2026-02-03 10:28:10 +0000 UTC" firstStartedPulling="2026-02-03 10:28:11.897394053 +0000 UTC m=+1562.053370182" lastFinishedPulling="2026-02-03 10:28:17.555032566 +0000 UTC m=+1567.711008695" observedRunningTime="2026-02-03 10:28:18.786955389 +0000 UTC m=+1568.942931518" watchObservedRunningTime="2026-02-03 10:28:18.800116555 +0000 UTC m=+1568.956092684" Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.766143 5010 generic.go:334] "Generic (PLEG): container finished" podID="900a4dd0-c8e2-4416-9a0e-8fff95a5053b" containerID="79dc7129a99144c2e59b3fda9930b79947c9ac7a248d6f8abe7b85572f2f5ea2" exitCode=0 Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.766385 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fmn8g" event={"ID":"900a4dd0-c8e2-4416-9a0e-8fff95a5053b","Type":"ContainerDied","Data":"79dc7129a99144c2e59b3fda9930b79947c9ac7a248d6f8abe7b85572f2f5ea2"} Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.770866 5010 generic.go:334] "Generic (PLEG): container finished" podID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerID="63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48" exitCode=0 Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.770896 5010 generic.go:334] "Generic (PLEG): container finished" podID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerID="353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363" exitCode=2 Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.770905 5010 generic.go:334] "Generic (PLEG): container finished" podID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerID="4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276" exitCode=0 Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.770969 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerDied","Data":"63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48"} Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.771067 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerDied","Data":"353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363"} Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.771105 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerDied","Data":"4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276"} Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.249885 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.308464 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-config-data\") pod \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.308578 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-scripts\") pod \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.308652 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-combined-ca-bundle\") pod \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.308831 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhn74\" (UniqueName: \"kubernetes.io/projected/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-kube-api-access-hhn74\") pod \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.318670 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-scripts" (OuterVolumeSpecName: "scripts") pod "900a4dd0-c8e2-4416-9a0e-8fff95a5053b" (UID: "900a4dd0-c8e2-4416-9a0e-8fff95a5053b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.320573 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-kube-api-access-hhn74" (OuterVolumeSpecName: "kube-api-access-hhn74") pod "900a4dd0-c8e2-4416-9a0e-8fff95a5053b" (UID: "900a4dd0-c8e2-4416-9a0e-8fff95a5053b"). InnerVolumeSpecName "kube-api-access-hhn74". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.360709 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-config-data" (OuterVolumeSpecName: "config-data") pod "900a4dd0-c8e2-4416-9a0e-8fff95a5053b" (UID: "900a4dd0-c8e2-4416-9a0e-8fff95a5053b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.376909 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "900a4dd0-c8e2-4416-9a0e-8fff95a5053b" (UID: "900a4dd0-c8e2-4416-9a0e-8fff95a5053b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.411412 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhn74\" (UniqueName: \"kubernetes.io/projected/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-kube-api-access-hhn74\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.411478 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.411501 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.411519 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.527883 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.616934 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-log-httpd\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.617023 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-combined-ca-bundle\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.617078 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-config-data\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.617184 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-sg-core-conf-yaml\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.617257 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-ceilometer-tls-certs\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.617305 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-scripts\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.617502 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-run-httpd\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.617612 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5zz8\" (UniqueName: \"kubernetes.io/projected/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-kube-api-access-z5zz8\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.619285 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.619640 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.622370 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-kube-api-access-z5zz8" (OuterVolumeSpecName: "kube-api-access-z5zz8") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). InnerVolumeSpecName "kube-api-access-z5zz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.625556 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-scripts" (OuterVolumeSpecName: "scripts") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.646532 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.672058 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.696441 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.715688 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-config-data" (OuterVolumeSpecName: "config-data") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720282 5010 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720333 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5zz8\" (UniqueName: \"kubernetes.io/projected/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-kube-api-access-z5zz8\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720345 5010 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720354 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720365 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720378 5010 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720390 5010 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720402 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.798827 5010 generic.go:334] "Generic (PLEG): container finished" podID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerID="21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e" exitCode=0 Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.798943 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerDied","Data":"21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e"} Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.799003 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerDied","Data":"4d55ccaf8e8fbc23ae8d8fcb578bf3c1e898e367f6ccb3f3993272add85d622a"} Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 
10:28:21.799029 5010 scope.go:117] "RemoveContainer" containerID="63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.798955 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.804636 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fmn8g" event={"ID":"900a4dd0-c8e2-4416-9a0e-8fff95a5053b","Type":"ContainerDied","Data":"5e355931a7d8aee1e5fce1e85e08f90a6fc5e4e40c3b64d40ecde61b241ba2a4"} Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.804685 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e355931a7d8aee1e5fce1e85e08f90a6fc5e4e40c3b64d40ecde61b241ba2a4" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.804746 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.847809 5010 scope.go:117] "RemoveContainer" containerID="353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.874234 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.894653 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.897075 5010 scope.go:117] "RemoveContainer" containerID="4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.904849 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:21 crc kubenswrapper[5010]: E0203 10:28:21.905582 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerName="dnsmasq-dns" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.905607 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerName="dnsmasq-dns" Feb 03 10:28:21 crc kubenswrapper[5010]: E0203 10:28:21.905627 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-notification-agent" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.905639 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-notification-agent" Feb 03 10:28:21 crc kubenswrapper[5010]: E0203 10:28:21.905652 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="sg-core" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.905661 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="sg-core" Feb 03 10:28:21 crc kubenswrapper[5010]: E0203 10:28:21.905672 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerName="init" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.905679 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerName="init" Feb 03 10:28:21 crc kubenswrapper[5010]: E0203 10:28:21.905690 5010 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="900a4dd0-c8e2-4416-9a0e-8fff95a5053b" containerName="nova-manage" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.905696 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="900a4dd0-c8e2-4416-9a0e-8fff95a5053b" containerName="nova-manage" Feb 03 10:28:21 crc kubenswrapper[5010]: E0203 10:28:21.905769 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-central-agent" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.905778 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-central-agent" Feb 03 10:28:21 crc kubenswrapper[5010]: E0203 10:28:21.905790 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="proxy-httpd" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.905796 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="proxy-httpd" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.906055 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-notification-agent" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.906086 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="sg-core" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.906111 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="900a4dd0-c8e2-4416-9a0e-8fff95a5053b" containerName="nova-manage" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.906124 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-central-agent" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.906144 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerName="dnsmasq-dns" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.906157 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="proxy-httpd" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.909277 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.912955 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.915923 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.916312 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.917609 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933062 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-scripts\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933131 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933161 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe58e747-c39e-4370-93bc-f72f8c5ee95a-log-httpd\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933386 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-config-data\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933473 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933627 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933724 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd872\" (UniqueName: \"kubernetes.io/projected/fe58e747-c39e-4370-93bc-f72f8c5ee95a-kube-api-access-cd872\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933769 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fe58e747-c39e-4370-93bc-f72f8c5ee95a-run-httpd\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.954245 5010 scope.go:117] "RemoveContainer" containerID="21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.036075 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.036809 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-log" containerID="cri-o://f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c" gracePeriod=30 Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037360 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-api" containerID="cri-o://4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666" gracePeriod=30 Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037415 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-scripts\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037487 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037517 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe58e747-c39e-4370-93bc-f72f8c5ee95a-log-httpd\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037593 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-config-data\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037658 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037689 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037744 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd872\" (UniqueName: 
\"kubernetes.io/projected/fe58e747-c39e-4370-93bc-f72f8c5ee95a-kube-api-access-cd872\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037782 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe58e747-c39e-4370-93bc-f72f8c5ee95a-run-httpd\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.038715 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe58e747-c39e-4370-93bc-f72f8c5ee95a-run-httpd\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.039018 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe58e747-c39e-4370-93bc-f72f8c5ee95a-log-httpd\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.045102 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.045102 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.046529 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.051982 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-scripts\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.052688 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-config-data\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.053896 5010 scope.go:117] "RemoveContainer" containerID="63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48" Feb 03 10:28:22 crc kubenswrapper[5010]: E0203 10:28:22.054480 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48\": container with ID starting with 63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48 not found: ID does not exist" 
containerID="63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.054521 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48"} err="failed to get container status \"63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48\": rpc error: code = NotFound desc = could not find container \"63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48\": container with ID starting with 63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48 not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.054550 5010 scope.go:117] "RemoveContainer" containerID="353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363" Feb 03 10:28:22 crc kubenswrapper[5010]: E0203 10:28:22.055790 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363\": container with ID starting with 353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363 not found: ID does not exist" containerID="353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.055843 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363"} err="failed to get container status \"353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363\": rpc error: code = NotFound desc = could not find container \"353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363\": container with ID starting with 353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363 not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.055881 5010 scope.go:117] "RemoveContainer" containerID="4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276" Feb 03 10:28:22 crc kubenswrapper[5010]: E0203 10:28:22.057504 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276\": container with ID starting with 4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276 not found: ID does not exist" containerID="4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.057587 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276"} err="failed to get container status \"4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276\": rpc error: code = NotFound desc = could not find container \"4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276\": container with ID starting with 4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276 not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.057650 5010 scope.go:117] "RemoveContainer" containerID="21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.065929 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.066363 5010 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a2d836d0-d303-41ca-9c8b-f714d6a4e76c" containerName="nova-scheduler-scheduler" containerID="cri-o://3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3" gracePeriod=30 Feb 03 10:28:22 crc kubenswrapper[5010]: E0203 10:28:22.067389 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e\": container with ID starting with 21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e not found: ID does not exist" containerID="21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.067452 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e"} err="failed to get container status \"21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e\": rpc error: code = NotFound desc = could not find container \"21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e\": container with ID starting with 21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.068973 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd872\" (UniqueName: \"kubernetes.io/projected/fe58e747-c39e-4370-93bc-f72f8c5ee95a-kube-api-access-cd872\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.135283 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.136021 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-metadata" containerID="cri-o://a78044c6ee003f2a2c2b9afaa9ab8fb12ae812a98e2ee39a42b2fc304776640e" gracePeriod=30 Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.136347 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-log" containerID="cri-o://70f58e247699be77808ee32bd051173d13561654851dcea2d20478da52e6150e" gracePeriod=30 Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.293835 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.577031 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" path="/var/lib/kubelet/pods/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff/volumes" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.689422 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.812149 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-config-data\") pod \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.812307 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-public-tls-certs\") pod \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.812348 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slhxg\" (UniqueName: \"kubernetes.io/projected/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-kube-api-access-slhxg\") pod \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.812396 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-internal-tls-certs\") pod \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.812451 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-logs\") pod \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.812544 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-combined-ca-bundle\") pod \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.814458 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-logs" (OuterVolumeSpecName: "logs") pod "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" (UID: "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.822402 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-kube-api-access-slhxg" (OuterVolumeSpecName: "kube-api-access-slhxg") pod "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" (UID: "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b"). InnerVolumeSpecName "kube-api-access-slhxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.841115 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.841287 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b","Type":"ContainerDied","Data":"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666"} Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.841724 5010 scope.go:117] "RemoveContainer" containerID="4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.840990 5010 generic.go:334] "Generic (PLEG): container finished" podID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerID="4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666" exitCode=0 Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.842143 5010 generic.go:334] "Generic (PLEG): container finished" podID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerID="f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c" exitCode=143 Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.842279 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b","Type":"ContainerDied","Data":"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c"} Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.842414 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b","Type":"ContainerDied","Data":"4ba4db9ad461a1c8c1413d0c4001a20f6f253c1f4411549548ef5cb960e4f2f8"} Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.854539 5010 generic.go:334] "Generic (PLEG): container finished" podID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerID="70f58e247699be77808ee32bd051173d13561654851dcea2d20478da52e6150e" exitCode=143 Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.854598 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c43ac79-0458-4b95-a9fd-26bc038c195b","Type":"ContainerDied","Data":"70f58e247699be77808ee32bd051173d13561654851dcea2d20478da52e6150e"} Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.856862 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" (UID: "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.879607 5010 scope.go:117] "RemoveContainer" containerID="f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.884992 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-config-data" (OuterVolumeSpecName: "config-data") pod "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" (UID: "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.885625 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" (UID: "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.902588 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" (UID: "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.904506 5010 scope.go:117] "RemoveContainer" containerID="4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666" Feb 03 10:28:22 crc kubenswrapper[5010]: E0203 10:28:22.906020 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666\": container with ID starting with 4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666 not found: ID does not exist" containerID="4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.906110 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666"} err="failed to get container status \"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666\": rpc error: code = NotFound desc = could not find container \"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666\": container with ID starting with 4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666 not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.906147 5010 scope.go:117] "RemoveContainer" containerID="f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c" Feb 03 10:28:22 crc kubenswrapper[5010]: E0203 10:28:22.906482 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c\": container with ID starting with f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c not found: ID does not exist" containerID="f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.906517 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c"} err="failed to get container status \"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c\": rpc error: code = NotFound desc = could not find container \"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c\": container with ID starting with f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.906538 5010 scope.go:117] "RemoveContainer" 
containerID="4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.906860 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666"} err="failed to get container status \"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666\": rpc error: code = NotFound desc = could not find container \"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666\": container with ID starting with 4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666 not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.906887 5010 scope.go:117] "RemoveContainer" containerID="f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.907151 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c"} err="failed to get container status \"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c\": rpc error: code = NotFound desc = could not find container \"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c\": container with ID starting with f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.916622 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.916676 5010 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.916691 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slhxg\" (UniqueName: \"kubernetes.io/projected/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-kube-api-access-slhxg\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.916703 5010 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.916725 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.916740 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.035858 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:23 crc kubenswrapper[5010]: W0203 10:28:23.037362 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe58e747_c39e_4370_93bc_f72f8c5ee95a.slice/crio-66cb3129fc03dffd78ff3ec6bfe9112c6f1b13c3583329999e822cc839867080 WatchSource:0}: Error finding container 
66cb3129fc03dffd78ff3ec6bfe9112c6f1b13c3583329999e822cc839867080: Status 404 returned error can't find the container with id 66cb3129fc03dffd78ff3ec6bfe9112c6f1b13c3583329999e822cc839867080 Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.179026 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.197067 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.221814 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:23 crc kubenswrapper[5010]: E0203 10:28:23.222449 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-log" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.222472 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-log" Feb 03 10:28:23 crc kubenswrapper[5010]: E0203 10:28:23.222517 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-api" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.222525 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-api" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.222744 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-log" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.222782 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-api" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.224249 5010 util.go:30] "No sandbox for pod can be found. 
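RemoveStaleState fires here because nova-api-0 is being recreated under a new UID: the CPU and memory managers drop every per-container assignment recorded under the old pod UID before admitting the replacement pod's containers. A sketch of that purge, with illustrative types:

package kubeletnotes

// assignmentKey stands in for the (podUID, containerName) pairs the
// cpu/memory managers key their state by.
type assignmentKey struct{ podUID, container string }

// removeStaleState drops assignments for pod UIDs that are no longer
// active, matching the "Deleted CPUSet assignment" lines above.
func removeStaleState(assignments map[assignmentKey]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			delete(assignments, k)
		}
	}
}
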
Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.230816 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.235834 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.238429 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.239451 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:23 crc kubenswrapper[5010]: E0203 10:28:23.274268 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 10:28:23 crc kubenswrapper[5010]: E0203 10:28:23.283509 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 10:28:23 crc kubenswrapper[5010]: E0203 10:28:23.286387 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 10:28:23 crc kubenswrapper[5010]: E0203 10:28:23.286453 5010 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a2d836d0-d303-41ca-9c8b-f714d6a4e76c" containerName="nova-scheduler-scheduler" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.327954 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-config-data\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.328024 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn84h\" (UniqueName: \"kubernetes.io/projected/aba2689d-cd13-4601-ac45-69409c411839-kube-api-access-sn84h\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.328085 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.328118 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-public-tls-certs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.328401 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.328871 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba2689d-cd13-4601-ac45-69409c411839-logs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.439280 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.439401 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba2689d-cd13-4601-ac45-69409c411839-logs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.439455 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-config-data\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.439478 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn84h\" (UniqueName: \"kubernetes.io/projected/aba2689d-cd13-4601-ac45-69409c411839-kube-api-access-sn84h\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.439512 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.439537 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-public-tls-certs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.440827 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba2689d-cd13-4601-ac45-69409c411839-logs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.455312 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-public-tls-certs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.462896 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn84h\" (UniqueName: \"kubernetes.io/projected/aba2689d-cd13-4601-ac45-69409c411839-kube-api-access-sn84h\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.465023 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.465196 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.487394 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-config-data\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.570348 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.882677 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe58e747-c39e-4370-93bc-f72f8c5ee95a","Type":"ContainerStarted","Data":"66cb3129fc03dffd78ff3ec6bfe9112c6f1b13c3583329999e822cc839867080"} Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.536483 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" path="/var/lib/kubelet/pods/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b/volumes" Feb 03 10:28:24 crc kubenswrapper[5010]: W0203 10:28:24.582607 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaba2689d_cd13_4601_ac45_69409c411839.slice/crio-96678b45cdfbb1ead44162e62acb7726902eb1ffd62d471a3b3d56338399f5b2 WatchSource:0}: Error finding container 96678b45cdfbb1ead44162e62acb7726902eb1ffd62d471a3b3d56338399f5b2: Status 404 returned error can't find the container with id 96678b45cdfbb1ead44162e62acb7726902eb1ffd62d471a3b3d56338399f5b2 Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.591143 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.903233 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe58e747-c39e-4370-93bc-f72f8c5ee95a","Type":"ContainerStarted","Data":"3fbbf425d6a8ae69a735e172d0f5fc3d55f7bb760d5fa7d006ec36b95d816215"} Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.905872 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"aba2689d-cd13-4601-ac45-69409c411839","Type":"ContainerStarted","Data":"bada7cce176643549ba1bc1cf410273f97a38e5aef52492efb83cb84621b729d"} Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.905944 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aba2689d-cd13-4601-ac45-69409c411839","Type":"ContainerStarted","Data":"96678b45cdfbb1ead44162e62acb7726902eb1ffd62d471a3b3d56338399f5b2"} Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.911178 5010 generic.go:334] "Generic (PLEG): container finished" podID="a2d836d0-d303-41ca-9c8b-f714d6a4e76c" containerID="3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3" exitCode=0 Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.911344 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2d836d0-d303-41ca-9c8b-f714d6a4e76c","Type":"ContainerDied","Data":"3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3"} Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.922971 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.085478 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-config-data\") pod \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.086294 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-combined-ca-bundle\") pod \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.086444 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6chss\" (UniqueName: \"kubernetes.io/projected/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-kube-api-access-6chss\") pod \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.094718 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-kube-api-access-6chss" (OuterVolumeSpecName: "kube-api-access-6chss") pod "a2d836d0-d303-41ca-9c8b-f714d6a4e76c" (UID: "a2d836d0-d303-41ca-9c8b-f714d6a4e76c"). InnerVolumeSpecName "kube-api-access-6chss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.127415 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2d836d0-d303-41ca-9c8b-f714d6a4e76c" (UID: "a2d836d0-d303-41ca-9c8b-f714d6a4e76c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.127471 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-config-data" (OuterVolumeSpecName: "config-data") pod "a2d836d0-d303-41ca-9c8b-f714d6a4e76c" (UID: "a2d836d0-d303-41ca-9c8b-f714d6a4e76c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.189550 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.189585 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.189597 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6chss\" (UniqueName: \"kubernetes.io/projected/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-kube-api-access-6chss\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.710417 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": read tcp 10.217.0.2:51018->10.217.0.192:8775: read: connection reset by peer" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.710455 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": read tcp 10.217.0.2:51030->10.217.0.192:8775: read: connection reset by peer" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.990472 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe58e747-c39e-4370-93bc-f72f8c5ee95a","Type":"ContainerStarted","Data":"b673fa7a85d4061e235a60332d266e4ae0d06383842372e25f038dfe5add4f5b"} Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.990910 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe58e747-c39e-4370-93bc-f72f8c5ee95a","Type":"ContainerStarted","Data":"30a71c784b7a9ba1d4aaa61721c0c5204c9023396080748a48ec3a5135045f10"} Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.011738 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aba2689d-cd13-4601-ac45-69409c411839","Type":"ContainerStarted","Data":"983a5c24c4d341cce56231a45d3dc293050162227992ac74a4484151faa42ffe"} Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.017367 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2d836d0-d303-41ca-9c8b-f714d6a4e76c","Type":"ContainerDied","Data":"58f162aa3d6e537665ac2963288a9914168137aa741e22132f9fea00cc29574c"} Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.017432 5010 scope.go:117] "RemoveContainer" containerID="3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.017709 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.023089 5010 generic.go:334] "Generic (PLEG): container finished" podID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerID="a78044c6ee003f2a2c2b9afaa9ab8fb12ae812a98e2ee39a42b2fc304776640e" exitCode=0 Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.023138 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c43ac79-0458-4b95-a9fd-26bc038c195b","Type":"ContainerDied","Data":"a78044c6ee003f2a2c2b9afaa9ab8fb12ae812a98e2ee39a42b2fc304776640e"} Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.088143 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.088116648 podStartE2EDuration="3.088116648s" podCreationTimestamp="2026-02-03 10:28:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:28:26.043874869 +0000 UTC m=+1576.199851018" watchObservedRunningTime="2026-02-03 10:28:26.088116648 +0000 UTC m=+1576.244092777" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.119672 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.136139 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.149409 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.153423 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:28:26 crc kubenswrapper[5010]: E0203 10:28:26.153796 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-log" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.153807 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-log" Feb 03 10:28:26 crc kubenswrapper[5010]: E0203 10:28:26.153823 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-metadata" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.153829 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-metadata" Feb 03 10:28:26 crc kubenswrapper[5010]: E0203 10:28:26.153843 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d836d0-d303-41ca-9c8b-f714d6a4e76c" containerName="nova-scheduler-scheduler" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.153851 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d836d0-d303-41ca-9c8b-f714d6a4e76c" containerName="nova-scheduler-scheduler" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.154046 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-log" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.154062 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-metadata" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.154073 5010 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="a2d836d0-d303-41ca-9c8b-f714d6a4e76c" containerName="nova-scheduler-scheduler" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.156620 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.159421 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.169151 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.231400 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-config-data\") pod \"4c43ac79-0458-4b95-a9fd-26bc038c195b\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.231720 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c43ac79-0458-4b95-a9fd-26bc038c195b-logs\") pod \"4c43ac79-0458-4b95-a9fd-26bc038c195b\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.231789 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-nova-metadata-tls-certs\") pod \"4c43ac79-0458-4b95-a9fd-26bc038c195b\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.231823 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-combined-ca-bundle\") pod \"4c43ac79-0458-4b95-a9fd-26bc038c195b\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.232172 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bnxb\" (UniqueName: \"kubernetes.io/projected/4c43ac79-0458-4b95-a9fd-26bc038c195b-kube-api-access-9bnxb\") pod \"4c43ac79-0458-4b95-a9fd-26bc038c195b\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.232711 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrm6k\" (UniqueName: \"kubernetes.io/projected/28559aae-4731-4653-a466-8c6f5c6c7dcf-kube-api-access-vrm6k\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.232862 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28559aae-4731-4653-a466-8c6f5c6c7dcf-config-data\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.232924 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28559aae-4731-4653-a466-8c6f5c6c7dcf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc 
kubenswrapper[5010]: I0203 10:28:26.249092 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c43ac79-0458-4b95-a9fd-26bc038c195b-logs" (OuterVolumeSpecName: "logs") pod "4c43ac79-0458-4b95-a9fd-26bc038c195b" (UID: "4c43ac79-0458-4b95-a9fd-26bc038c195b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.277481 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c43ac79-0458-4b95-a9fd-26bc038c195b-kube-api-access-9bnxb" (OuterVolumeSpecName: "kube-api-access-9bnxb") pod "4c43ac79-0458-4b95-a9fd-26bc038c195b" (UID: "4c43ac79-0458-4b95-a9fd-26bc038c195b"). InnerVolumeSpecName "kube-api-access-9bnxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.305682 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c43ac79-0458-4b95-a9fd-26bc038c195b" (UID: "4c43ac79-0458-4b95-a9fd-26bc038c195b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.316754 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4c43ac79-0458-4b95-a9fd-26bc038c195b" (UID: "4c43ac79-0458-4b95-a9fd-26bc038c195b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.335009 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrm6k\" (UniqueName: \"kubernetes.io/projected/28559aae-4731-4653-a466-8c6f5c6c7dcf-kube-api-access-vrm6k\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.335123 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28559aae-4731-4653-a466-8c6f5c6c7dcf-config-data\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.335173 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28559aae-4731-4653-a466-8c6f5c6c7dcf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.335247 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bnxb\" (UniqueName: \"kubernetes.io/projected/4c43ac79-0458-4b95-a9fd-26bc038c195b-kube-api-access-9bnxb\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.335261 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c43ac79-0458-4b95-a9fd-26bc038c195b-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.335272 5010 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.335284 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.344119 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28559aae-4731-4653-a466-8c6f5c6c7dcf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.345720 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28559aae-4731-4653-a466-8c6f5c6c7dcf-config-data\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.356431 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-config-data" (OuterVolumeSpecName: "config-data") pod "4c43ac79-0458-4b95-a9fd-26bc038c195b" (UID: "4c43ac79-0458-4b95-a9fd-26bc038c195b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.358009 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrm6k\" (UniqueName: \"kubernetes.io/projected/28559aae-4731-4653-a466-8c6f5c6c7dcf-kube-api-access-vrm6k\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.436705 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.437946 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.514288 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2d836d0-d303-41ca-9c8b-f714d6a4e76c" path="/var/lib/kubelet/pods/a2d836d0-d303-41ca-9c8b-f714d6a4e76c/volumes" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.966749 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:28:26 crc kubenswrapper[5010]: W0203 10:28:26.977603 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28559aae_4731_4653_a466_8c6f5c6c7dcf.slice/crio-2ed910c827770743af4ba77485f94924ae732d9b7ebf8412c33571e414d0961c WatchSource:0}: Error finding container 2ed910c827770743af4ba77485f94924ae732d9b7ebf8412c33571e414d0961c: Status 404 returned error can't find the container with id 2ed910c827770743af4ba77485f94924ae732d9b7ebf8412c33571e414d0961c Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.033744 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"28559aae-4731-4653-a466-8c6f5c6c7dcf","Type":"ContainerStarted","Data":"2ed910c827770743af4ba77485f94924ae732d9b7ebf8412c33571e414d0961c"} Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.038529 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.038997 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c43ac79-0458-4b95-a9fd-26bc038c195b","Type":"ContainerDied","Data":"d8c29f4fa62c3f6d24562331b8a0ba99f0c35f78468e992ff282bcdb95f55c82"} Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.039027 5010 scope.go:117] "RemoveContainer" containerID="a78044c6ee003f2a2c2b9afaa9ab8fb12ae812a98e2ee39a42b2fc304776640e" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.096440 5010 scope.go:117] "RemoveContainer" containerID="70f58e247699be77808ee32bd051173d13561654851dcea2d20478da52e6150e" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.112267 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.129971 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.145796 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.147426 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.153805 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.154094 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.230724 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.257909 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edaaf3a7-a254-4a29-875a-643e46308f33-logs\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.257993 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk4tq\" (UniqueName: \"kubernetes.io/projected/edaaf3a7-a254-4a29-875a-643e46308f33-kube-api-access-nk4tq\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.258027 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.258075 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.258112 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-config-data\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.360731 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk4tq\" (UniqueName: \"kubernetes.io/projected/edaaf3a7-a254-4a29-875a-643e46308f33-kube-api-access-nk4tq\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.360846 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.360917 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.360981 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-config-data\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.361052 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edaaf3a7-a254-4a29-875a-643e46308f33-logs\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.361749 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edaaf3a7-a254-4a29-875a-643e46308f33-logs\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.367687 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.371101 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-config-data\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.371813 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.382822 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk4tq\" (UniqueName: \"kubernetes.io/projected/edaaf3a7-a254-4a29-875a-643e46308f33-kube-api-access-nk4tq\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.621495 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.048949 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"28559aae-4731-4653-a466-8c6f5c6c7dcf","Type":"ContainerStarted","Data":"13ad0ac55357133529dbef7213e34a9655d73d32b0305d790f3ed0e0bc454043"} Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.051364 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe58e747-c39e-4370-93bc-f72f8c5ee95a","Type":"ContainerStarted","Data":"7dfe01dd5b5df071335a047adc00fd893f119b00593473ed5caf709c9b6193a5"} Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.051601 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.071325 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.071307422 podStartE2EDuration="2.071307422s" podCreationTimestamp="2026-02-03 10:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:28:28.070562273 +0000 UTC m=+1578.226538412" watchObservedRunningTime="2026-02-03 10:28:28.071307422 +0000 UTC m=+1578.227283551" Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.098187 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.545258234 podStartE2EDuration="7.098165697s" podCreationTimestamp="2026-02-03 10:28:21 +0000 UTC" firstStartedPulling="2026-02-03 10:28:23.041144411 +0000 UTC m=+1573.197120540" lastFinishedPulling="2026-02-03 10:28:27.594051874 +0000 UTC m=+1577.750028003" observedRunningTime="2026-02-03 10:28:28.092602216 +0000 UTC m=+1578.248578345" watchObservedRunningTime="2026-02-03 10:28:28.098165697 +0000 UTC m=+1578.254141816" Feb 03 10:28:28 crc kubenswrapper[5010]: W0203 10:28:28.139057 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedaaf3a7_a254_4a29_875a_643e46308f33.slice/crio-9a7d436f32cd314ad8bbb1fc0c1318b84815558ee4edee486c8a74bfc949d94b WatchSource:0}: Error finding container 9a7d436f32cd314ad8bbb1fc0c1318b84815558ee4edee486c8a74bfc949d94b: Status 404 returned error can't find the container with id 9a7d436f32cd314ad8bbb1fc0c1318b84815558ee4edee486c8a74bfc949d94b Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.154882 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.503772 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:28:28 crc kubenswrapper[5010]: E0203 10:28:28.504496 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.522247 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" 
path="/var/lib/kubelet/pods/4c43ac79-0458-4b95-a9fd-26bc038c195b/volumes" Feb 03 10:28:29 crc kubenswrapper[5010]: I0203 10:28:29.082022 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edaaf3a7-a254-4a29-875a-643e46308f33","Type":"ContainerStarted","Data":"34dd5978c6ddc33c553961ffbbc90db6cb3ce288fd9e042a9a3a0ee007729c5e"} Feb 03 10:28:29 crc kubenswrapper[5010]: I0203 10:28:29.082080 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edaaf3a7-a254-4a29-875a-643e46308f33","Type":"ContainerStarted","Data":"76f162e7ffb118a37fd9f58b414f239a530d0e86b8704d45fb9a481cedb91f2c"} Feb 03 10:28:29 crc kubenswrapper[5010]: I0203 10:28:29.082098 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edaaf3a7-a254-4a29-875a-643e46308f33","Type":"ContainerStarted","Data":"9a7d436f32cd314ad8bbb1fc0c1318b84815558ee4edee486c8a74bfc949d94b"} Feb 03 10:28:29 crc kubenswrapper[5010]: I0203 10:28:29.112320 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.112299334 podStartE2EDuration="2.112299334s" podCreationTimestamp="2026-02-03 10:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:28:29.100978645 +0000 UTC m=+1579.256954784" watchObservedRunningTime="2026-02-03 10:28:29.112299334 +0000 UTC m=+1579.268275473" Feb 03 10:28:31 crc kubenswrapper[5010]: I0203 10:28:31.438707 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 03 10:28:32 crc kubenswrapper[5010]: I0203 10:28:32.621974 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 10:28:32 crc kubenswrapper[5010]: I0203 10:28:32.622093 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 10:28:33 crc kubenswrapper[5010]: I0203 10:28:33.571431 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 10:28:33 crc kubenswrapper[5010]: I0203 10:28:33.571556 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.585465 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aba2689d-cd13-4601-ac45-69409c411839" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.585621 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aba2689d-cd13-4601-ac45-69409c411839" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.778786 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sg4lc"] Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.781011 5010 util.go:30] "No sandbox for pod can be found. 
Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.788671 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sg4lc"]
Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.918967 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-catalog-content\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc"
Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.919029 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-utilities\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc"
Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.919061 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqw52\" (UniqueName: \"kubernetes.io/projected/5185b2c5-d115-4546-afcf-bc17a00a6cda-kube-api-access-lqw52\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc"
Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.021039 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-catalog-content\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc"
Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.021096 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-utilities\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc"
Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.021126 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqw52\" (UniqueName: \"kubernetes.io/projected/5185b2c5-d115-4546-afcf-bc17a00a6cda-kube-api-access-lqw52\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc"
Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.021748 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-catalog-content\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc"
Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.021797 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-utilities\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc"
Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.051486 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqw52\" (UniqueName: \"kubernetes.io/projected/5185b2c5-d115-4546-afcf-bc17a00a6cda-kube-api-access-lqw52\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc"
Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.107602 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sg4lc"
Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.637838 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sg4lc"]
Feb 03 10:28:36 crc kubenswrapper[5010]: I0203 10:28:36.165661 5010 generic.go:334] "Generic (PLEG): container finished" podID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerID="a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4" exitCode=0
Feb 03 10:28:36 crc kubenswrapper[5010]: I0203 10:28:36.165719 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg4lc" event={"ID":"5185b2c5-d115-4546-afcf-bc17a00a6cda","Type":"ContainerDied","Data":"a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4"}
Feb 03 10:28:36 crc kubenswrapper[5010]: I0203 10:28:36.165754 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg4lc" event={"ID":"5185b2c5-d115-4546-afcf-bc17a00a6cda","Type":"ContainerStarted","Data":"e1eaf28060cc636ff36317d3b149bb856ce747051158d19fd1ca2f7260aa8e45"}
Feb 03 10:28:36 crc kubenswrapper[5010]: I0203 10:28:36.438164 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 03 10:28:36 crc kubenswrapper[5010]: I0203 10:28:36.467549 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 03 10:28:37 crc kubenswrapper[5010]: I0203 10:28:37.178399 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg4lc" event={"ID":"5185b2c5-d115-4546-afcf-bc17a00a6cda","Type":"ContainerStarted","Data":"856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb"}
Feb 03 10:28:37 crc kubenswrapper[5010]: I0203 10:28:37.219848 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 03 10:28:37 crc kubenswrapper[5010]: I0203 10:28:37.622052 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 03 10:28:37 crc kubenswrapper[5010]: I0203 10:28:37.622101 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 03 10:28:38 crc kubenswrapper[5010]: I0203 10:28:38.193547 5010 generic.go:334] "Generic (PLEG): container finished" podID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerID="856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb" exitCode=0
Feb 03 10:28:38 crc kubenswrapper[5010]: I0203 10:28:38.193607 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg4lc" event={"ID":"5185b2c5-d115-4546-afcf-bc17a00a6cda","Type":"ContainerDied","Data":"856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb"}
Feb 03 10:28:38 crc kubenswrapper[5010]: I0203 10:28:38.637430 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="edaaf3a7-a254-4a29-875a-643e46308f33" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
\"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 10:28:38 crc kubenswrapper[5010]: I0203 10:28:38.638079 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="edaaf3a7-a254-4a29-875a-643e46308f33" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 10:28:39 crc kubenswrapper[5010]: I0203 10:28:39.208455 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg4lc" event={"ID":"5185b2c5-d115-4546-afcf-bc17a00a6cda","Type":"ContainerStarted","Data":"3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee"} Feb 03 10:28:39 crc kubenswrapper[5010]: I0203 10:28:39.355781 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sg4lc" podStartSLOduration=2.951509091 podStartE2EDuration="5.35575078s" podCreationTimestamp="2026-02-03 10:28:34 +0000 UTC" firstStartedPulling="2026-02-03 10:28:36.16762433 +0000 UTC m=+1586.323600459" lastFinishedPulling="2026-02-03 10:28:38.571866019 +0000 UTC m=+1588.727842148" observedRunningTime="2026-02-03 10:28:39.328051343 +0000 UTC m=+1589.484027472" watchObservedRunningTime="2026-02-03 10:28:39.35575078 +0000 UTC m=+1589.511726919" Feb 03 10:28:41 crc kubenswrapper[5010]: I0203 10:28:41.502936 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:28:41 crc kubenswrapper[5010]: E0203 10:28:41.503921 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:28:43 crc kubenswrapper[5010]: I0203 10:28:43.577661 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 10:28:43 crc kubenswrapper[5010]: I0203 10:28:43.578136 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 10:28:43 crc kubenswrapper[5010]: I0203 10:28:43.579896 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 10:28:43 crc kubenswrapper[5010]: I0203 10:28:43.584570 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 10:28:44 crc kubenswrapper[5010]: I0203 10:28:44.250448 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 10:28:44 crc kubenswrapper[5010]: I0203 10:28:44.256740 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 10:28:45 crc kubenswrapper[5010]: I0203 10:28:45.108975 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:45 crc kubenswrapper[5010]: I0203 10:28:45.110325 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:45 crc kubenswrapper[5010]: I0203 10:28:45.164017 5010 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:45 crc kubenswrapper[5010]: I0203 10:28:45.306762 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:45 crc kubenswrapper[5010]: I0203 10:28:45.396962 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sg4lc"] Feb 03 10:28:47 crc kubenswrapper[5010]: I0203 10:28:47.276788 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sg4lc" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerName="registry-server" containerID="cri-o://3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee" gracePeriod=2 Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:47.630999 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:47.632350 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:47.648719 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:47.856735 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.009129 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqw52\" (UniqueName: \"kubernetes.io/projected/5185b2c5-d115-4546-afcf-bc17a00a6cda-kube-api-access-lqw52\") pod \"5185b2c5-d115-4546-afcf-bc17a00a6cda\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.009252 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-catalog-content\") pod \"5185b2c5-d115-4546-afcf-bc17a00a6cda\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.009344 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-utilities\") pod \"5185b2c5-d115-4546-afcf-bc17a00a6cda\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.011590 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-utilities" (OuterVolumeSpecName: "utilities") pod "5185b2c5-d115-4546-afcf-bc17a00a6cda" (UID: "5185b2c5-d115-4546-afcf-bc17a00a6cda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.028109 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5185b2c5-d115-4546-afcf-bc17a00a6cda-kube-api-access-lqw52" (OuterVolumeSpecName: "kube-api-access-lqw52") pod "5185b2c5-d115-4546-afcf-bc17a00a6cda" (UID: "5185b2c5-d115-4546-afcf-bc17a00a6cda"). InnerVolumeSpecName "kube-api-access-lqw52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.112662 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqw52\" (UniqueName: \"kubernetes.io/projected/5185b2c5-d115-4546-afcf-bc17a00a6cda-kube-api-access-lqw52\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.112711 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.165741 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5185b2c5-d115-4546-afcf-bc17a00a6cda" (UID: "5185b2c5-d115-4546-afcf-bc17a00a6cda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.214548 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.289286 5010 generic.go:334] "Generic (PLEG): container finished" podID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerID="3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee" exitCode=0 Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.289420 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.289436 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg4lc" event={"ID":"5185b2c5-d115-4546-afcf-bc17a00a6cda","Type":"ContainerDied","Data":"3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee"} Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.291886 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg4lc" event={"ID":"5185b2c5-d115-4546-afcf-bc17a00a6cda","Type":"ContainerDied","Data":"e1eaf28060cc636ff36317d3b149bb856ce747051158d19fd1ca2f7260aa8e45"} Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.291957 5010 scope.go:117] "RemoveContainer" containerID="3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.297761 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.332650 5010 scope.go:117] "RemoveContainer" containerID="856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.364834 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sg4lc"] Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.374796 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sg4lc"] Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.379736 5010 scope.go:117] "RemoveContainer" containerID="a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.433930 5010 scope.go:117] "RemoveContainer" 
containerID="3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee" Feb 03 10:28:48 crc kubenswrapper[5010]: E0203 10:28:48.434764 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee\": container with ID starting with 3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee not found: ID does not exist" containerID="3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.434834 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee"} err="failed to get container status \"3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee\": rpc error: code = NotFound desc = could not find container \"3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee\": container with ID starting with 3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee not found: ID does not exist" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.434875 5010 scope.go:117] "RemoveContainer" containerID="856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb" Feb 03 10:28:48 crc kubenswrapper[5010]: E0203 10:28:48.435635 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb\": container with ID starting with 856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb not found: ID does not exist" containerID="856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.435693 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb"} err="failed to get container status \"856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb\": rpc error: code = NotFound desc = could not find container \"856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb\": container with ID starting with 856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb not found: ID does not exist" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.435735 5010 scope.go:117] "RemoveContainer" containerID="a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4" Feb 03 10:28:48 crc kubenswrapper[5010]: E0203 10:28:48.436415 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4\": container with ID starting with a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4 not found: ID does not exist" containerID="a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.436489 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4"} err="failed to get container status \"a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4\": rpc error: code = NotFound desc = could not find container \"a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4\": container with ID starting with 
Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.516887 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" path="/var/lib/kubelet/pods/5185b2c5-d115-4546-afcf-bc17a00a6cda/volumes"
Feb 03 10:28:52 crc kubenswrapper[5010]: I0203 10:28:52.313148 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 03 10:28:56 crc kubenswrapper[5010]: I0203 10:28:56.502088 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1"
Feb 03 10:28:56 crc kubenswrapper[5010]: E0203 10:28:56.502875 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:29:02 crc kubenswrapper[5010]: I0203 10:29:02.626430 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 03 10:29:03 crc kubenswrapper[5010]: I0203 10:29:03.629645 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 03 10:29:07 crc kubenswrapper[5010]: I0203 10:29:07.061679 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="rabbitmq" containerID="cri-o://602c03e894fa88a9b33161b23751551ae10019029e054f5933d29cf4949f0620" gracePeriod=604796
Feb 03 10:29:07 crc kubenswrapper[5010]: I0203 10:29:07.907264 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="rabbitmq" containerID="cri-o://e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89" gracePeriod=604796
Feb 03 10:29:08 crc kubenswrapper[5010]: I0203 10:29:08.036972 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused"
Feb 03 10:29:08 crc kubenswrapper[5010]: I0203 10:29:08.503071 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1"
Feb 03 10:29:08 crc kubenswrapper[5010]: E0203 10:29:08.503530 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:29:08 crc kubenswrapper[5010]: I0203 10:29:08.617730 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused"
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.576954 5010 generic.go:334] "Generic (PLEG): container finished" podID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerID="602c03e894fa88a9b33161b23751551ae10019029e054f5933d29cf4949f0620" exitCode=0
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.577161 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce83ed2-cbef-4045-8822-6f58268b28b3","Type":"ContainerDied","Data":"602c03e894fa88a9b33161b23751551ae10019029e054f5933d29cf4949f0620"}
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.832874 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.851688 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce83ed2-cbef-4045-8822-6f58268b28b3-pod-info\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") "
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.851749 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-tls\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") "
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.851781 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-config-data\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") "
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.851811 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-confd\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") "
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.851852 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-plugins\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") "
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.851934 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") "
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.851986 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5rwd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-kube-api-access-m5rwd\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") "
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.852011 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-plugins-conf\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") "
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.852034 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-erlang-cookie\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") "
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.852123 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-server-conf\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") "
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.852162 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce83ed2-cbef-4045-8822-6f58268b28b3-erlang-cookie-secret\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") "
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.853484 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.854325 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.854361 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.858701 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.858759 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce83ed2-cbef-4045-8822-6f58268b28b3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.859414 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-kube-api-access-m5rwd" (OuterVolumeSpecName: "kube-api-access-m5rwd") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "kube-api-access-m5rwd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.865892 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2ce83ed2-cbef-4045-8822-6f58268b28b3-pod-info" (OuterVolumeSpecName: "pod-info") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.882136 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.909711 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-config-data" (OuterVolumeSpecName: "config-data") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.952255 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-server-conf" (OuterVolumeSpecName: "server-conf") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.955692 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.955811 5010 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-server-conf\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.955914 5010 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce83ed2-cbef-4045-8822-6f58268b28b3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.956020 5010 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce83ed2-cbef-4045-8822-6f58268b28b3-pod-info\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.956171 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.956247 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.956322 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.956406 5010 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.956482 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5rwd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-kube-api-access-m5rwd\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.956556 5010 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.994632 5010 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.010926 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.061715 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.061754 5010 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.529944 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.581012 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-pod-info\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.581107 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkwkl\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-kube-api-access-qkwkl\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.581168 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-config-data\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.581202 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-tls\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.582405 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-plugins\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.582440 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-erlang-cookie\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.582473 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-server-conf\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.582507 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-confd\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: 
\"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.582590 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-erlang-cookie-secret\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.582666 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-plugins-conf\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.582692 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.584999 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.587073 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.592131 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.593332 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.598243 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.598581 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-kube-api-access-qkwkl" (OuterVolumeSpecName: "kube-api-access-qkwkl") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "kube-api-access-qkwkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.598881 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-pod-info" (OuterVolumeSpecName: "pod-info") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.600407 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.617750 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce83ed2-cbef-4045-8822-6f58268b28b3","Type":"ContainerDied","Data":"97cdcebe285a4f7a484868c96029b1b0d97151d7f63016f73836ed870ad4197d"} Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.617821 5010 scope.go:117] "RemoveContainer" containerID="602c03e894fa88a9b33161b23751551ae10019029e054f5933d29cf4949f0620" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.617994 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.634502 5010 generic.go:334] "Generic (PLEG): container finished" podID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerID="e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89" exitCode=0 Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.634555 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f2066c8b-8b89-4dcb-972d-aea4dcd1c105","Type":"ContainerDied","Data":"e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89"} Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.634586 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f2066c8b-8b89-4dcb-972d-aea4dcd1c105","Type":"ContainerDied","Data":"6f662c0876b2bb6a1a91c65ab1f7cf8a34f9b5b27a5996afb9426d7a8621423b"} Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.634661 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.640023 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-config-data" (OuterVolumeSpecName: "config-data") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685180 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685259 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685274 5010 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685285 5010 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685307 5010 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685317 5010 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-pod-info\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685327 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkwkl\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-kube-api-access-qkwkl\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685335 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685344 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.707975 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-server-conf" (OuterVolumeSpecName: "server-conf") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.717691 5010 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.736407 5010 scope.go:117] "RemoveContainer" containerID="10e7a7e1923769d25869f1642046743d27038f14081a9edd79e0d2a9d1c7d095" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.747795 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.765012 5010 scope.go:117] "RemoveContainer" containerID="e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.765033 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.777037 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.787230 5010 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.787267 5010 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-server-conf\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.787279 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.803754 5010 scope.go:117] "RemoveContainer" containerID="35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.812419 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.812863 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="rabbitmq" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.812879 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="rabbitmq" Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.812890 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerName="extract-utilities" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.812897 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerName="extract-utilities" Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.812915 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerName="registry-server" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.812921 5010 
Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.812941 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="setup-container"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.812947 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="setup-container"
Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.812955 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="setup-container"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.812961 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="setup-container"
Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.812981 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="rabbitmq"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.812986 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="rabbitmq"
Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.813006 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerName="extract-content"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.813012 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerName="extract-content"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.813194 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="rabbitmq"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.813234 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerName="registry-server"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.813246 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="rabbitmq"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.814836 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.817964 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.827538 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.827739 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.827837 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.827918 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.828047 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9nfm9"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.828532 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.841328 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.858804 5010 scope.go:117] "RemoveContainer" containerID="e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89"
Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.859264 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89\": container with ID starting with e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89 not found: ID does not exist" containerID="e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.859309 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89"} err="failed to get container status \"e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89\": rpc error: code = NotFound desc = could not find container \"e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89\": container with ID starting with e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89 not found: ID does not exist"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.859334 5010 scope.go:117] "RemoveContainer" containerID="35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae"
Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.859599 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae\": container with ID starting with 35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae not found: ID does not exist" containerID="35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae"
Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.859628 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae"} err="failed to get container status \"35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae\": rpc error: code = NotFound desc = could not find container \"35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae\": container with ID starting with 35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae not found: ID does not exist"
\"35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae\": rpc error: code = NotFound desc = could not find container \"35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae\": container with ID starting with 35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae not found: ID does not exist" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.888944 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-server-conf\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889003 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/543f315d-d2f8-497f-a2c1-1a929c1611be-pod-info\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889030 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889062 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfn2t\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-kube-api-access-nfn2t\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889334 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889457 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889587 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889699 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889729 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889778 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-config-data\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889919 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/543f315d-d2f8-497f-a2c1-1a929c1611be-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.974738 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.997793 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.998165 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/543f315d-d2f8-497f-a2c1-1a929c1611be-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.998289 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-server-conf\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.998332 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/543f315d-d2f8-497f-a2c1-1a929c1611be-pod-info\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.998383 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.998474 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfn2t\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-kube-api-access-nfn2t\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.998602 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc 
kubenswrapper[5010]: I0203 10:29:14.998752 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.998848 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.999003 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.999140 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.999171 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.999222 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-config-data\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.999740 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.000069 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-config-data\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.000197 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-server-conf\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.000479 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.001037 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.007359 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.007359 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/543f315d-d2f8-497f-a2c1-1a929c1611be-pod-info\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.013145 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/543f315d-d2f8-497f-a2c1-1a929c1611be-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.013389 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.019406 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.022691 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.025960 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.026121 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.026263 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ld7g9" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.027589 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.027860 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.028022 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.028204 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.033911 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.056285 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.056952 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfn2t\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-kube-api-access-nfn2t\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.101740 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.101945 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.102245 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.102344 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.102547 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.102654 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.102921 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.103017 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.103099 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.103170 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.103244 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf265\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-kube-api-access-pf265\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.134857 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.239781 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.239955 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240015 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240141 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240287 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240503 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240576 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240643 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240698 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240772 5010 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pf265\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-kube-api-access-pf265\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240878 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.245739 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.245957 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.246356 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.246631 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.246831 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.247854 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.263950 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.266057 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.267035 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.274957 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.282726 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf265\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-kube-api-access-pf265\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.324457 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.347172 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.768807 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.920020 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:29:15 crc kubenswrapper[5010]: W0203 10:29:15.925662 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9044f36b_9c2b_47bf_b1a3_46c14c6ec5cf.slice/crio-6181ea4de4a405350e47624cb8c31335ee5fc8611261f4795045fa244338c476 WatchSource:0}: Error finding container 6181ea4de4a405350e47624cb8c31335ee5fc8611261f4795045fa244338c476: Status 404 returned error can't find the container with id 6181ea4de4a405350e47624cb8c31335ee5fc8611261f4795045fa244338c476 Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.043923 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mjf7k"] Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.045513 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.050918 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.069752 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mjf7k"] Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.169480 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.169588 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-config\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.169663 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.169770 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.169964 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8q2d\" (UniqueName: \"kubernetes.io/projected/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-kube-api-access-z8q2d\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.170010 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.170043 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.272811 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8q2d\" (UniqueName: \"kubernetes.io/projected/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-kube-api-access-z8q2d\") pod 
\"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.272897 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.272943 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.272989 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.273046 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-config\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.273097 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.273156 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.274645 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.274664 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.274790 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.275179 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.275208 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.275432 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-config\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.293730 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8q2d\" (UniqueName: \"kubernetes.io/projected/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-kube-api-access-z8q2d\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.373266 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.558314 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" path="/var/lib/kubelet/pods/2ce83ed2-cbef-4045-8822-6f58268b28b3/volumes" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.559613 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" path="/var/lib/kubelet/pods/f2066c8b-8b89-4dcb-972d-aea4dcd1c105/volumes" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.667626 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"543f315d-d2f8-497f-a2c1-1a929c1611be","Type":"ContainerStarted","Data":"fcef8f75e389407c1f346ac05d9ab158ea83bf4db6071355624db725d02f0e9c"} Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.669297 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf","Type":"ContainerStarted","Data":"6181ea4de4a405350e47624cb8c31335ee5fc8611261f4795045fa244338c476"} Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.932184 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mjf7k"] Feb 03 10:29:17 crc kubenswrapper[5010]: I0203 10:29:17.685499 5010 generic.go:334] "Generic (PLEG): container finished" podID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerID="56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d" exitCode=0 Feb 03 10:29:17 crc kubenswrapper[5010]: I0203 10:29:17.685890 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" 
event={"ID":"d1f7d409-fa49-4bd1-a07b-0c349e72b21c","Type":"ContainerDied","Data":"56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d"} Feb 03 10:29:17 crc kubenswrapper[5010]: I0203 10:29:17.686777 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" event={"ID":"d1f7d409-fa49-4bd1-a07b-0c349e72b21c","Type":"ContainerStarted","Data":"b90baa1d4d9f0ddbc89dd4b10b55aff56a1978d65ba73c5d42c76702253705b7"} Feb 03 10:29:18 crc kubenswrapper[5010]: I0203 10:29:18.696882 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"543f315d-d2f8-497f-a2c1-1a929c1611be","Type":"ContainerStarted","Data":"19fb7b1a68b1ff52895088d592e7289b1fff4b1eeeb28c2089dc4b6320456f19"} Feb 03 10:29:18 crc kubenswrapper[5010]: I0203 10:29:18.702083 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf","Type":"ContainerStarted","Data":"09c52085ec4e3b7039b34527eb3963f0af7d7da40200e027a5bee0de0a333736"} Feb 03 10:29:18 crc kubenswrapper[5010]: I0203 10:29:18.705551 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" event={"ID":"d1f7d409-fa49-4bd1-a07b-0c349e72b21c","Type":"ContainerStarted","Data":"e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2"} Feb 03 10:29:18 crc kubenswrapper[5010]: I0203 10:29:18.705703 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:18 crc kubenswrapper[5010]: I0203 10:29:18.746543 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" podStartSLOduration=2.746527847 podStartE2EDuration="2.746527847s" podCreationTimestamp="2026-02-03 10:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:29:18.744311021 +0000 UTC m=+1628.900287170" watchObservedRunningTime="2026-02-03 10:29:18.746527847 +0000 UTC m=+1628.902503976" Feb 03 10:29:21 crc kubenswrapper[5010]: I0203 10:29:21.502919 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:29:21 crc kubenswrapper[5010]: E0203 10:29:21.503824 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.375353 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.465381 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5t6hf"] Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.466426 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" podUID="112eb3e9-cf11-4513-be2d-53a42670413e" containerName="dnsmasq-dns" containerID="cri-o://e50968d30732ac2c762348838c8f14a711f5720b5d244d0a09fd6ce7ae975514" gracePeriod=10 Feb 03 10:29:26 crc 
kubenswrapper[5010]: I0203 10:29:26.683086 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-845df"] Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.687369 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.698364 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-845df"] Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.729828 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn4bn\" (UniqueName: \"kubernetes.io/projected/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-kube-api-access-gn4bn\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.730247 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-config\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.730358 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.730485 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.730602 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.730814 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-dns-svc\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.730994 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.790345 5010 generic.go:334] "Generic (PLEG): container finished" podID="112eb3e9-cf11-4513-be2d-53a42670413e" 
containerID="e50968d30732ac2c762348838c8f14a711f5720b5d244d0a09fd6ce7ae975514" exitCode=0 Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.790389 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" event={"ID":"112eb3e9-cf11-4513-be2d-53a42670413e","Type":"ContainerDied","Data":"e50968d30732ac2c762348838c8f14a711f5720b5d244d0a09fd6ce7ae975514"} Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.832460 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-config\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.832523 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.832574 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.832618 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.832734 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-dns-svc\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.832807 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.832841 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn4bn\" (UniqueName: \"kubernetes.io/projected/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-kube-api-access-gn4bn\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.833731 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-config\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 
10:29:26.834635 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.835115 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.835123 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-dns-svc\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.835376 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.837378 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.853890 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn4bn\" (UniqueName: \"kubernetes.io/projected/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-kube-api-access-gn4bn\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.014310 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.165749 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.260761 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-nb\") pod \"112eb3e9-cf11-4513-be2d-53a42670413e\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.260805 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-sb\") pod \"112eb3e9-cf11-4513-be2d-53a42670413e\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.260894 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-config\") pod \"112eb3e9-cf11-4513-be2d-53a42670413e\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.260944 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-svc\") pod \"112eb3e9-cf11-4513-be2d-53a42670413e\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.261039 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-swift-storage-0\") pod \"112eb3e9-cf11-4513-be2d-53a42670413e\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.261069 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm9pt\" (UniqueName: \"kubernetes.io/projected/112eb3e9-cf11-4513-be2d-53a42670413e-kube-api-access-pm9pt\") pod \"112eb3e9-cf11-4513-be2d-53a42670413e\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.271423 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/112eb3e9-cf11-4513-be2d-53a42670413e-kube-api-access-pm9pt" (OuterVolumeSpecName: "kube-api-access-pm9pt") pod "112eb3e9-cf11-4513-be2d-53a42670413e" (UID: "112eb3e9-cf11-4513-be2d-53a42670413e"). InnerVolumeSpecName "kube-api-access-pm9pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.325210 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "112eb3e9-cf11-4513-be2d-53a42670413e" (UID: "112eb3e9-cf11-4513-be2d-53a42670413e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.328136 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "112eb3e9-cf11-4513-be2d-53a42670413e" (UID: "112eb3e9-cf11-4513-be2d-53a42670413e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.329975 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "112eb3e9-cf11-4513-be2d-53a42670413e" (UID: "112eb3e9-cf11-4513-be2d-53a42670413e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.334574 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-config" (OuterVolumeSpecName: "config") pod "112eb3e9-cf11-4513-be2d-53a42670413e" (UID: "112eb3e9-cf11-4513-be2d-53a42670413e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.342917 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "112eb3e9-cf11-4513-be2d-53a42670413e" (UID: "112eb3e9-cf11-4513-be2d-53a42670413e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.363485 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.363543 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm9pt\" (UniqueName: \"kubernetes.io/projected/112eb3e9-cf11-4513-be2d-53a42670413e-kube-api-access-pm9pt\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.363569 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.363584 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.363597 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.363607 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.572267 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-845df"] Feb 03 10:29:27 crc kubenswrapper[5010]: W0203 10:29:27.576134 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d935acc_a244_4c1f_a9f8_9924fa8b61f1.slice/crio-c0d0ee1a3dd0f8d1ec602e0dd75b1cdb018a087f84d0cc15e397b26c541c7dd3 WatchSource:0}: Error finding container c0d0ee1a3dd0f8d1ec602e0dd75b1cdb018a087f84d0cc15e397b26c541c7dd3: Status 404 returned 
error can't find the container with id c0d0ee1a3dd0f8d1ec602e0dd75b1cdb018a087f84d0cc15e397b26c541c7dd3 Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.802394 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-845df" event={"ID":"3d935acc-a244-4c1f-a9f8-9924fa8b61f1","Type":"ContainerStarted","Data":"c0d0ee1a3dd0f8d1ec602e0dd75b1cdb018a087f84d0cc15e397b26c541c7dd3"} Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.805035 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" event={"ID":"112eb3e9-cf11-4513-be2d-53a42670413e","Type":"ContainerDied","Data":"9696bbc5c05e1ee911f02b7758d1162dc7d17512676a3ce246b9266d4a35accd"} Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.805075 5010 scope.go:117] "RemoveContainer" containerID="e50968d30732ac2c762348838c8f14a711f5720b5d244d0a09fd6ce7ae975514" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.805202 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.835375 5010 scope.go:117] "RemoveContainer" containerID="84b72c9b54d05dcdbccb71e2a8f9d59046f32de5c34fe094370a4de1492b0639" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.848945 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5t6hf"] Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.860078 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5t6hf"] Feb 03 10:29:28 crc kubenswrapper[5010]: I0203 10:29:28.517018 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="112eb3e9-cf11-4513-be2d-53a42670413e" path="/var/lib/kubelet/pods/112eb3e9-cf11-4513-be2d-53a42670413e/volumes" Feb 03 10:29:28 crc kubenswrapper[5010]: I0203 10:29:28.815506 5010 generic.go:334] "Generic (PLEG): container finished" podID="3d935acc-a244-4c1f-a9f8-9924fa8b61f1" containerID="52b75dc93253253ed5c3a050029beed8bfde18a85d4c17d4fcd8b1f6f28c4e39" exitCode=0 Feb 03 10:29:28 crc kubenswrapper[5010]: I0203 10:29:28.815586 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-845df" event={"ID":"3d935acc-a244-4c1f-a9f8-9924fa8b61f1","Type":"ContainerDied","Data":"52b75dc93253253ed5c3a050029beed8bfde18a85d4c17d4fcd8b1f6f28c4e39"} Feb 03 10:29:29 crc kubenswrapper[5010]: I0203 10:29:29.830534 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-845df" event={"ID":"3d935acc-a244-4c1f-a9f8-9924fa8b61f1","Type":"ContainerStarted","Data":"1aedaeb7d50a68d6d9432c3805aea359909c960c180d48e1a2adcc84f7707c3f"} Feb 03 10:29:29 crc kubenswrapper[5010]: I0203 10:29:29.830986 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:29 crc kubenswrapper[5010]: I0203 10:29:29.855888 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-845df" podStartSLOduration=3.855839686 podStartE2EDuration="3.855839686s" podCreationTimestamp="2026-02-03 10:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:29:29.853414484 +0000 UTC m=+1640.009390623" watchObservedRunningTime="2026-02-03 10:29:29.855839686 +0000 UTC m=+1640.011815825" Feb 03 10:29:36 crc kubenswrapper[5010]: I0203 10:29:36.502792 5010 
scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:29:36 crc kubenswrapper[5010]: E0203 10:29:36.505022 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.016404 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.084646 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mjf7k"] Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.085003 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" podUID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerName="dnsmasq-dns" containerID="cri-o://e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2" gracePeriod=10 Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.572842 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.688192 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-svc\") pod \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.688268 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-config\") pod \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.688420 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-sb\") pod \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.688438 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-openstack-edpm-ipam\") pod \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.688495 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8q2d\" (UniqueName: \"kubernetes.io/projected/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-kube-api-access-z8q2d\") pod \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.688518 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-nb\") pod 
\"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.688537 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-swift-storage-0\") pod \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.708780 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-kube-api-access-z8q2d" (OuterVolumeSpecName: "kube-api-access-z8q2d") pod "d1f7d409-fa49-4bd1-a07b-0c349e72b21c" (UID: "d1f7d409-fa49-4bd1-a07b-0c349e72b21c"). InnerVolumeSpecName "kube-api-access-z8q2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.742511 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1f7d409-fa49-4bd1-a07b-0c349e72b21c" (UID: "d1f7d409-fa49-4bd1-a07b-0c349e72b21c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.746744 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d1f7d409-fa49-4bd1-a07b-0c349e72b21c" (UID: "d1f7d409-fa49-4bd1-a07b-0c349e72b21c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.755249 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "d1f7d409-fa49-4bd1-a07b-0c349e72b21c" (UID: "d1f7d409-fa49-4bd1-a07b-0c349e72b21c"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.757171 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d1f7d409-fa49-4bd1-a07b-0c349e72b21c" (UID: "d1f7d409-fa49-4bd1-a07b-0c349e72b21c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.764896 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-config" (OuterVolumeSpecName: "config") pod "d1f7d409-fa49-4bd1-a07b-0c349e72b21c" (UID: "d1f7d409-fa49-4bd1-a07b-0c349e72b21c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.765611 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d1f7d409-fa49-4bd1-a07b-0c349e72b21c" (UID: "d1f7d409-fa49-4bd1-a07b-0c349e72b21c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.790770 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.790806 5010 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.790816 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8q2d\" (UniqueName: \"kubernetes.io/projected/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-kube-api-access-z8q2d\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.790830 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.790839 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.790848 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.790859 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.900220 5010 generic.go:334] "Generic (PLEG): container finished" podID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerID="e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2" exitCode=0 Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.900288 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" event={"ID":"d1f7d409-fa49-4bd1-a07b-0c349e72b21c","Type":"ContainerDied","Data":"e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2"} Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.900320 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" event={"ID":"d1f7d409-fa49-4bd1-a07b-0c349e72b21c","Type":"ContainerDied","Data":"b90baa1d4d9f0ddbc89dd4b10b55aff56a1978d65ba73c5d42c76702253705b7"} Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.900348 5010 scope.go:117] "RemoveContainer" containerID="e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.900495 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.940145 5010 scope.go:117] "RemoveContainer" containerID="56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.977441 5010 scope.go:117] "RemoveContainer" containerID="e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2" Feb 03 10:29:37 crc kubenswrapper[5010]: E0203 10:29:37.978645 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2\": container with ID starting with e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2 not found: ID does not exist" containerID="e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.978699 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2"} err="failed to get container status \"e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2\": rpc error: code = NotFound desc = could not find container \"e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2\": container with ID starting with e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2 not found: ID does not exist" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.978730 5010 scope.go:117] "RemoveContainer" containerID="56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d" Feb 03 10:29:37 crc kubenswrapper[5010]: E0203 10:29:37.979771 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d\": container with ID starting with 56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d not found: ID does not exist" containerID="56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.979849 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d"} err="failed to get container status \"56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d\": rpc error: code = NotFound desc = could not find container \"56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d\": container with ID starting with 56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d not found: ID does not exist" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.985737 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mjf7k"] Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.995874 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mjf7k"] Feb 03 10:29:38 crc kubenswrapper[5010]: I0203 10:29:38.513456 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" path="/var/lib/kubelet/pods/d1f7d409-fa49-4bd1-a07b-0c349e72b21c/volumes" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.805467 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749"] Feb 03 10:29:45 crc kubenswrapper[5010]: E0203 10:29:45.808383 5010 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerName="init" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.808413 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerName="init" Feb 03 10:29:45 crc kubenswrapper[5010]: E0203 10:29:45.808430 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerName="dnsmasq-dns" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.808438 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerName="dnsmasq-dns" Feb 03 10:29:45 crc kubenswrapper[5010]: E0203 10:29:45.808454 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112eb3e9-cf11-4513-be2d-53a42670413e" containerName="dnsmasq-dns" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.808462 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="112eb3e9-cf11-4513-be2d-53a42670413e" containerName="dnsmasq-dns" Feb 03 10:29:45 crc kubenswrapper[5010]: E0203 10:29:45.808496 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112eb3e9-cf11-4513-be2d-53a42670413e" containerName="init" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.808503 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="112eb3e9-cf11-4513-be2d-53a42670413e" containerName="init" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.808735 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="112eb3e9-cf11-4513-be2d-53a42670413e" containerName="dnsmasq-dns" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.808767 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerName="dnsmasq-dns" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.809868 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.813070 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.814664 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.814980 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.816433 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.821307 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749"] Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.855710 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.855777 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsh87\" (UniqueName: \"kubernetes.io/projected/43ecdc43-d866-4902-89cb-0ce68e89fe05-kube-api-access-rsh87\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.855884 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.855912 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.956988 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.957033 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsh87\" (UniqueName: 
\"kubernetes.io/projected/43ecdc43-d866-4902-89cb-0ce68e89fe05-kube-api-access-rsh87\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.957097 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.957123 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.963165 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.963296 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.964333 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.973505 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsh87\" (UniqueName: \"kubernetes.io/projected/43ecdc43-d866-4902-89cb-0ce68e89fe05-kube-api-access-rsh87\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:46 crc kubenswrapper[5010]: I0203 10:29:46.134192 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:46 crc kubenswrapper[5010]: I0203 10:29:46.683960 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749"] Feb 03 10:29:46 crc kubenswrapper[5010]: I0203 10:29:46.991631 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" event={"ID":"43ecdc43-d866-4902-89cb-0ce68e89fe05","Type":"ContainerStarted","Data":"77fbac41963512257d1526ae37ef85f2001ddf70c4b35586b4cb448e373c633b"} Feb 03 10:29:47 crc kubenswrapper[5010]: I0203 10:29:47.502446 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:29:47 crc kubenswrapper[5010]: E0203 10:29:47.503131 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:29:50 crc kubenswrapper[5010]: I0203 10:29:50.022423 5010 generic.go:334] "Generic (PLEG): container finished" podID="543f315d-d2f8-497f-a2c1-1a929c1611be" containerID="19fb7b1a68b1ff52895088d592e7289b1fff4b1eeeb28c2089dc4b6320456f19" exitCode=0 Feb 03 10:29:50 crc kubenswrapper[5010]: I0203 10:29:50.022940 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"543f315d-d2f8-497f-a2c1-1a929c1611be","Type":"ContainerDied","Data":"19fb7b1a68b1ff52895088d592e7289b1fff4b1eeeb28c2089dc4b6320456f19"} Feb 03 10:29:51 crc kubenswrapper[5010]: I0203 10:29:51.033551 5010 generic.go:334] "Generic (PLEG): container finished" podID="9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf" containerID="09c52085ec4e3b7039b34527eb3963f0af7d7da40200e027a5bee0de0a333736" exitCode=0 Feb 03 10:29:51 crc kubenswrapper[5010]: I0203 10:29:51.033644 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf","Type":"ContainerDied","Data":"09c52085ec4e3b7039b34527eb3963f0af7d7da40200e027a5bee0de0a333736"} Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.144072 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb"] Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.146082 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.148967 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.149288 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.166470 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb"] Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.361873 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34e554f0-be79-4c9c-974d-f25941ae930e-secret-volume\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.361955 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e554f0-be79-4c9c-974d-f25941ae930e-config-volume\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.362044 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8czt\" (UniqueName: \"kubernetes.io/projected/34e554f0-be79-4c9c-974d-f25941ae930e-kube-api-access-c8czt\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: E0203 10:30:00.396079 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Feb 03 10:30:00 crc kubenswrapper[5010]: E0203 10:30:00.396314 5010 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 03 10:30:00 crc kubenswrapper[5010]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Feb 03 10:30:00 crc kubenswrapper[5010]: - hosts: all Feb 03 10:30:00 crc kubenswrapper[5010]: strategy: linear Feb 03 10:30:00 crc kubenswrapper[5010]: tasks: Feb 03 10:30:00 crc kubenswrapper[5010]: - name: Enable podified-repos Feb 03 10:30:00 crc kubenswrapper[5010]: become: true Feb 03 10:30:00 crc kubenswrapper[5010]: ansible.builtin.shell: | Feb 03 10:30:00 crc kubenswrapper[5010]: set -euxo pipefail Feb 03 10:30:00 crc kubenswrapper[5010]: pushd /var/tmp Feb 03 10:30:00 crc kubenswrapper[5010]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Feb 03 10:30:00 crc kubenswrapper[5010]: pushd repo-setup-main Feb 03 10:30:00 crc 
kubenswrapper[5010]: python3 -m venv ./venv Feb 03 10:30:00 crc kubenswrapper[5010]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Feb 03 10:30:00 crc kubenswrapper[5010]: ./venv/bin/repo-setup current-podified -b antelope Feb 03 10:30:00 crc kubenswrapper[5010]: popd Feb 03 10:30:00 crc kubenswrapper[5010]: rm -rf repo-setup-main Feb 03 10:30:00 crc kubenswrapper[5010]: Feb 03 10:30:00 crc kubenswrapper[5010]: Feb 03 10:30:00 crc kubenswrapper[5010]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Feb 03 10:30:00 crc kubenswrapper[5010]: edpm_override_hosts: openstack-edpm-ipam Feb 03 10:30:00 crc kubenswrapper[5010]: edpm_service_type: repo-setup Feb 03 10:30:00 crc kubenswrapper[5010]: Feb 03 10:30:00 crc kubenswrapper[5010]: Feb 03 10:30:00 crc kubenswrapper[5010]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsh87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-mg749_openstack(43ecdc43-d866-4902-89cb-0ce68e89fe05): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Feb 03 10:30:00 crc kubenswrapper[5010]: > logger="UnhandledError" Feb 03 10:30:00 crc kubenswrapper[5010]: E0203 10:30:00.397419 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" podUID="43ecdc43-d866-4902-89cb-0ce68e89fe05" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.466927 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8czt\" (UniqueName: \"kubernetes.io/projected/34e554f0-be79-4c9c-974d-f25941ae930e-kube-api-access-c8czt\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.467655 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34e554f0-be79-4c9c-974d-f25941ae930e-secret-volume\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.467706 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e554f0-be79-4c9c-974d-f25941ae930e-config-volume\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.468758 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e554f0-be79-4c9c-974d-f25941ae930e-config-volume\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.472814 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34e554f0-be79-4c9c-974d-f25941ae930e-secret-volume\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.482485 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8czt\" (UniqueName: \"kubernetes.io/projected/34e554f0-be79-4c9c-974d-f25941ae930e-kube-api-access-c8czt\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.508636 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:30:00 crc kubenswrapper[5010]: E0203 10:30:00.509086 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.515426 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.987899 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb"] Feb 03 10:30:00 crc kubenswrapper[5010]: W0203 10:30:00.990511 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34e554f0_be79_4c9c_974d_f25941ae930e.slice/crio-a12583ebc18635cfe4abc59f20a5088499fc468fa5cbdc945925543afdc66fa1 WatchSource:0}: Error finding container a12583ebc18635cfe4abc59f20a5088499fc468fa5cbdc945925543afdc66fa1: Status 404 returned error can't find the container with id a12583ebc18635cfe4abc59f20a5088499fc468fa5cbdc945925543afdc66fa1 Feb 03 10:30:01 crc kubenswrapper[5010]: I0203 10:30:01.141288 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" event={"ID":"34e554f0-be79-4c9c-974d-f25941ae930e","Type":"ContainerStarted","Data":"a12583ebc18635cfe4abc59f20a5088499fc468fa5cbdc945925543afdc66fa1"} Feb 03 10:30:01 crc kubenswrapper[5010]: I0203 10:30:01.144279 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"543f315d-d2f8-497f-a2c1-1a929c1611be","Type":"ContainerStarted","Data":"dd4807d6c0736ad636d34b769cd1839372915e22b697abfb3ff750b12a7a18fc"} Feb 03 10:30:01 crc kubenswrapper[5010]: I0203 10:30:01.145613 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 03 10:30:01 crc kubenswrapper[5010]: I0203 10:30:01.148122 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf","Type":"ContainerStarted","Data":"bf8498e9e77d45722feb55d8cf9c2655523b1106b4098f04a3b76453dfa0da9a"} Feb 03 10:30:01 crc kubenswrapper[5010]: I0203 10:30:01.148744 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:30:01 crc kubenswrapper[5010]: E0203 10:30:01.148881 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" podUID="43ecdc43-d866-4902-89cb-0ce68e89fe05" Feb 03 10:30:01 crc kubenswrapper[5010]: I0203 10:30:01.175531 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=47.175507889 podStartE2EDuration="47.175507889s" podCreationTimestamp="2026-02-03 10:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:30:01.170570833 +0000 UTC m=+1671.326546952" watchObservedRunningTime="2026-02-03 10:30:01.175507889 +0000 UTC m=+1671.331484018" Feb 03 10:30:01 crc kubenswrapper[5010]: I0203 10:30:01.226229 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=47.226195612 podStartE2EDuration="47.226195612s" podCreationTimestamp="2026-02-03 10:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-03 10:30:01.222059037 +0000 UTC m=+1671.378035166" watchObservedRunningTime="2026-02-03 10:30:01.226195612 +0000 UTC m=+1671.382171741" Feb 03 10:30:02 crc kubenswrapper[5010]: I0203 10:30:02.158937 5010 generic.go:334] "Generic (PLEG): container finished" podID="34e554f0-be79-4c9c-974d-f25941ae930e" containerID="50c1d73139063edd3d9e95aeb676f19fdb661e56cb93f7dad0c5a0ed756233ca" exitCode=0 Feb 03 10:30:02 crc kubenswrapper[5010]: I0203 10:30:02.159047 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" event={"ID":"34e554f0-be79-4c9c-974d-f25941ae930e","Type":"ContainerDied","Data":"50c1d73139063edd3d9e95aeb676f19fdb661e56cb93f7dad0c5a0ed756233ca"} Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.482975 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.634743 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8czt\" (UniqueName: \"kubernetes.io/projected/34e554f0-be79-4c9c-974d-f25941ae930e-kube-api-access-c8czt\") pod \"34e554f0-be79-4c9c-974d-f25941ae930e\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.636058 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34e554f0-be79-4c9c-974d-f25941ae930e-secret-volume\") pod \"34e554f0-be79-4c9c-974d-f25941ae930e\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.636334 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e554f0-be79-4c9c-974d-f25941ae930e-config-volume\") pod \"34e554f0-be79-4c9c-974d-f25941ae930e\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.637201 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e554f0-be79-4c9c-974d-f25941ae930e-config-volume" (OuterVolumeSpecName: "config-volume") pod "34e554f0-be79-4c9c-974d-f25941ae930e" (UID: "34e554f0-be79-4c9c-974d-f25941ae930e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.642394 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e554f0-be79-4c9c-974d-f25941ae930e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "34e554f0-be79-4c9c-974d-f25941ae930e" (UID: "34e554f0-be79-4c9c-974d-f25941ae930e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.645843 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e554f0-be79-4c9c-974d-f25941ae930e-kube-api-access-c8czt" (OuterVolumeSpecName: "kube-api-access-c8czt") pod "34e554f0-be79-4c9c-974d-f25941ae930e" (UID: "34e554f0-be79-4c9c-974d-f25941ae930e"). InnerVolumeSpecName "kube-api-access-c8czt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.738641 5010 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34e554f0-be79-4c9c-974d-f25941ae930e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.738687 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8czt\" (UniqueName: \"kubernetes.io/projected/34e554f0-be79-4c9c-974d-f25941ae930e-kube-api-access-c8czt\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.738697 5010 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e554f0-be79-4c9c-974d-f25941ae930e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:04 crc kubenswrapper[5010]: I0203 10:30:04.185311 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" event={"ID":"34e554f0-be79-4c9c-974d-f25941ae930e","Type":"ContainerDied","Data":"a12583ebc18635cfe4abc59f20a5088499fc468fa5cbdc945925543afdc66fa1"} Feb 03 10:30:04 crc kubenswrapper[5010]: I0203 10:30:04.185376 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a12583ebc18635cfe4abc59f20a5088499fc468fa5cbdc945925543afdc66fa1" Feb 03 10:30:04 crc kubenswrapper[5010]: I0203 10:30:04.185437 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:12 crc kubenswrapper[5010]: I0203 10:30:12.221556 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:30:13 crc kubenswrapper[5010]: I0203 10:30:13.272937 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" event={"ID":"43ecdc43-d866-4902-89cb-0ce68e89fe05","Type":"ContainerStarted","Data":"532c0063bf8daca6dcc284fc64ff56a88aee7dc3a78ab9eb4836585e9d528bda"} Feb 03 10:30:13 crc kubenswrapper[5010]: I0203 10:30:13.297135 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" podStartSLOduration=2.761535313 podStartE2EDuration="28.297112397s" podCreationTimestamp="2026-02-03 10:29:45 +0000 UTC" firstStartedPulling="2026-02-03 10:29:46.683804034 +0000 UTC m=+1656.839780163" lastFinishedPulling="2026-02-03 10:30:12.219381118 +0000 UTC m=+1682.375357247" observedRunningTime="2026-02-03 10:30:13.288535618 +0000 UTC m=+1683.444511767" watchObservedRunningTime="2026-02-03 10:30:13.297112397 +0000 UTC m=+1683.453088526" Feb 03 10:30:13 crc kubenswrapper[5010]: I0203 10:30:13.504122 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:30:13 crc kubenswrapper[5010]: E0203 10:30:13.505387 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:30:15 crc kubenswrapper[5010]: I0203 10:30:15.138448 5010 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 03 10:30:15 crc kubenswrapper[5010]: I0203 10:30:15.351467 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:30:19 crc kubenswrapper[5010]: I0203 10:30:19.652732 5010 scope.go:117] "RemoveContainer" containerID="387dd9fd0160568ebec8f1a6d5d1c5088020bf051ddedc665506a7243fc7b05d" Feb 03 10:30:19 crc kubenswrapper[5010]: I0203 10:30:19.686997 5010 scope.go:117] "RemoveContainer" containerID="ecc134dc06388d88bee9d6893b38c4e64f29d454add40ba84636bf94ef646d8a" Feb 03 10:30:25 crc kubenswrapper[5010]: I0203 10:30:25.381572 5010 generic.go:334] "Generic (PLEG): container finished" podID="43ecdc43-d866-4902-89cb-0ce68e89fe05" containerID="532c0063bf8daca6dcc284fc64ff56a88aee7dc3a78ab9eb4836585e9d528bda" exitCode=0 Feb 03 10:30:25 crc kubenswrapper[5010]: I0203 10:30:25.381661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" event={"ID":"43ecdc43-d866-4902-89cb-0ce68e89fe05","Type":"ContainerDied","Data":"532c0063bf8daca6dcc284fc64ff56a88aee7dc3a78ab9eb4836585e9d528bda"} Feb 03 10:30:25 crc kubenswrapper[5010]: I0203 10:30:25.502882 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:30:25 crc kubenswrapper[5010]: E0203 10:30:25.503184 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:30:26 crc kubenswrapper[5010]: I0203 10:30:26.869870 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:30:26 crc kubenswrapper[5010]: I0203 10:30:26.996334 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsh87\" (UniqueName: \"kubernetes.io/projected/43ecdc43-d866-4902-89cb-0ce68e89fe05-kube-api-access-rsh87\") pod \"43ecdc43-d866-4902-89cb-0ce68e89fe05\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " Feb 03 10:30:26 crc kubenswrapper[5010]: I0203 10:30:26.996550 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-ssh-key-openstack-edpm-ipam\") pod \"43ecdc43-d866-4902-89cb-0ce68e89fe05\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " Feb 03 10:30:26 crc kubenswrapper[5010]: I0203 10:30:26.997415 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-repo-setup-combined-ca-bundle\") pod \"43ecdc43-d866-4902-89cb-0ce68e89fe05\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " Feb 03 10:30:26 crc kubenswrapper[5010]: I0203 10:30:26.997481 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-inventory\") pod \"43ecdc43-d866-4902-89cb-0ce68e89fe05\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.003020 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "43ecdc43-d866-4902-89cb-0ce68e89fe05" (UID: "43ecdc43-d866-4902-89cb-0ce68e89fe05"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.009698 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ecdc43-d866-4902-89cb-0ce68e89fe05-kube-api-access-rsh87" (OuterVolumeSpecName: "kube-api-access-rsh87") pod "43ecdc43-d866-4902-89cb-0ce68e89fe05" (UID: "43ecdc43-d866-4902-89cb-0ce68e89fe05"). InnerVolumeSpecName "kube-api-access-rsh87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.025906 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "43ecdc43-d866-4902-89cb-0ce68e89fe05" (UID: "43ecdc43-d866-4902-89cb-0ce68e89fe05"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.033053 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-inventory" (OuterVolumeSpecName: "inventory") pod "43ecdc43-d866-4902-89cb-0ce68e89fe05" (UID: "43ecdc43-d866-4902-89cb-0ce68e89fe05"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.100127 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.100165 5010 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.100175 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.100187 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsh87\" (UniqueName: \"kubernetes.io/projected/43ecdc43-d866-4902-89cb-0ce68e89fe05-kube-api-access-rsh87\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.423821 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" event={"ID":"43ecdc43-d866-4902-89cb-0ce68e89fe05","Type":"ContainerDied","Data":"77fbac41963512257d1526ae37ef85f2001ddf70c4b35586b4cb448e373c633b"} Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.423864 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77fbac41963512257d1526ae37ef85f2001ddf70c4b35586b4cb448e373c633b" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.423918 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.573539 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk"] Feb 03 10:30:27 crc kubenswrapper[5010]: E0203 10:30:27.573975 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ecdc43-d866-4902-89cb-0ce68e89fe05" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.573993 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ecdc43-d866-4902-89cb-0ce68e89fe05" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 03 10:30:27 crc kubenswrapper[5010]: E0203 10:30:27.574002 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e554f0-be79-4c9c-974d-f25941ae930e" containerName="collect-profiles" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.574009 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e554f0-be79-4c9c-974d-f25941ae930e" containerName="collect-profiles" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.574187 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="43ecdc43-d866-4902-89cb-0ce68e89fe05" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.574205 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e554f0-be79-4c9c-974d-f25941ae930e" containerName="collect-profiles" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.574811 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.579613 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.579659 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.579748 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.579830 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.592446 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk"] Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.720466 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.720830 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkq5x\" (UniqueName: \"kubernetes.io/projected/36d3f978-a301-44e6-a401-72e94c9f70ad-kube-api-access-gkq5x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.720879 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.822728 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.822800 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkq5x\" (UniqueName: \"kubernetes.io/projected/36d3f978-a301-44e6-a401-72e94c9f70ad-kube-api-access-gkq5x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.822858 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.830441 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.833149 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.840126 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkq5x\" (UniqueName: \"kubernetes.io/projected/36d3f978-a301-44e6-a401-72e94c9f70ad-kube-api-access-gkq5x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.893837 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:28 crc kubenswrapper[5010]: I0203 10:30:28.441590 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk"] Feb 03 10:30:28 crc kubenswrapper[5010]: I0203 10:30:28.455205 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 10:30:29 crc kubenswrapper[5010]: I0203 10:30:29.441002 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" event={"ID":"36d3f978-a301-44e6-a401-72e94c9f70ad","Type":"ContainerStarted","Data":"ae6a116bb479bd12b5c8f968f81170c52418ccece8e5dc2d957f317923c84955"} Feb 03 10:30:30 crc kubenswrapper[5010]: I0203 10:30:30.455105 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" event={"ID":"36d3f978-a301-44e6-a401-72e94c9f70ad","Type":"ContainerStarted","Data":"520e85302ebeae40d4d393da385fd7d92cc796319d6b0edc6e78b25df2accb20"} Feb 03 10:30:30 crc kubenswrapper[5010]: I0203 10:30:30.476376 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" podStartSLOduration=2.645414555 podStartE2EDuration="3.476357958s" podCreationTimestamp="2026-02-03 10:30:27 +0000 UTC" firstStartedPulling="2026-02-03 10:30:28.455012891 +0000 UTC m=+1698.610989020" lastFinishedPulling="2026-02-03 10:30:29.285956294 +0000 UTC m=+1699.441932423" observedRunningTime="2026-02-03 10:30:30.472037028 +0000 UTC m=+1700.628013177" watchObservedRunningTime="2026-02-03 10:30:30.476357958 +0000 UTC m=+1700.632334087" Feb 03 10:30:32 crc kubenswrapper[5010]: I0203 10:30:32.476630 5010 generic.go:334] "Generic (PLEG): container finished" podID="36d3f978-a301-44e6-a401-72e94c9f70ad" 
containerID="520e85302ebeae40d4d393da385fd7d92cc796319d6b0edc6e78b25df2accb20" exitCode=0 Feb 03 10:30:32 crc kubenswrapper[5010]: I0203 10:30:32.476704 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" event={"ID":"36d3f978-a301-44e6-a401-72e94c9f70ad","Type":"ContainerDied","Data":"520e85302ebeae40d4d393da385fd7d92cc796319d6b0edc6e78b25df2accb20"} Feb 03 10:30:33 crc kubenswrapper[5010]: I0203 10:30:33.936167 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.047996 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-inventory\") pod \"36d3f978-a301-44e6-a401-72e94c9f70ad\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.048382 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkq5x\" (UniqueName: \"kubernetes.io/projected/36d3f978-a301-44e6-a401-72e94c9f70ad-kube-api-access-gkq5x\") pod \"36d3f978-a301-44e6-a401-72e94c9f70ad\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.048660 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-ssh-key-openstack-edpm-ipam\") pod \"36d3f978-a301-44e6-a401-72e94c9f70ad\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.054062 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d3f978-a301-44e6-a401-72e94c9f70ad-kube-api-access-gkq5x" (OuterVolumeSpecName: "kube-api-access-gkq5x") pod "36d3f978-a301-44e6-a401-72e94c9f70ad" (UID: "36d3f978-a301-44e6-a401-72e94c9f70ad"). InnerVolumeSpecName "kube-api-access-gkq5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.075175 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "36d3f978-a301-44e6-a401-72e94c9f70ad" (UID: "36d3f978-a301-44e6-a401-72e94c9f70ad"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.081912 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-inventory" (OuterVolumeSpecName: "inventory") pod "36d3f978-a301-44e6-a401-72e94c9f70ad" (UID: "36d3f978-a301-44e6-a401-72e94c9f70ad"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.151185 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.151558 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.151578 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkq5x\" (UniqueName: \"kubernetes.io/projected/36d3f978-a301-44e6-a401-72e94c9f70ad-kube-api-access-gkq5x\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.493641 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" event={"ID":"36d3f978-a301-44e6-a401-72e94c9f70ad","Type":"ContainerDied","Data":"ae6a116bb479bd12b5c8f968f81170c52418ccece8e5dc2d957f317923c84955"} Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.493687 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae6a116bb479bd12b5c8f968f81170c52418ccece8e5dc2d957f317923c84955" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.493763 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.568659 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf"] Feb 03 10:30:34 crc kubenswrapper[5010]: E0203 10:30:34.569166 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d3f978-a301-44e6-a401-72e94c9f70ad" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.569190 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d3f978-a301-44e6-a401-72e94c9f70ad" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.569478 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d3f978-a301-44e6-a401-72e94c9f70ad" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.570321 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.573521 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.575549 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.575595 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.575973 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.582087 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf"] Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.661075 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmtk2\" (UniqueName: \"kubernetes.io/projected/2d389772-7902-4aca-8bc3-03a0708fbaa2-kube-api-access-jmtk2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.661125 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.661325 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.661440 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.763682 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.763839 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmtk2\" (UniqueName: 
\"kubernetes.io/projected/2d389772-7902-4aca-8bc3-03a0708fbaa2-kube-api-access-jmtk2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.763877 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.763963 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.769547 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.770152 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.778814 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.782014 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmtk2\" (UniqueName: \"kubernetes.io/projected/2d389772-7902-4aca-8bc3-03a0708fbaa2-kube-api-access-jmtk2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.892866 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:35 crc kubenswrapper[5010]: I0203 10:30:35.494185 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf"] Feb 03 10:30:36 crc kubenswrapper[5010]: I0203 10:30:36.514013 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" event={"ID":"2d389772-7902-4aca-8bc3-03a0708fbaa2","Type":"ContainerStarted","Data":"1c3d5f240ee62be6fa51825a10963f07b9c3d37c85ce03fca5f277444b1d0397"} Feb 03 10:30:36 crc kubenswrapper[5010]: I0203 10:30:36.514661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" event={"ID":"2d389772-7902-4aca-8bc3-03a0708fbaa2","Type":"ContainerStarted","Data":"2ef65aac28dddf89deb7ce485b857019655fec507cad6ee360424ff04f3a20c1"} Feb 03 10:30:36 crc kubenswrapper[5010]: I0203 10:30:36.542053 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" podStartSLOduration=2.059071199 podStartE2EDuration="2.542028253s" podCreationTimestamp="2026-02-03 10:30:34 +0000 UTC" firstStartedPulling="2026-02-03 10:30:35.499025859 +0000 UTC m=+1705.655001988" lastFinishedPulling="2026-02-03 10:30:35.981982863 +0000 UTC m=+1706.137959042" observedRunningTime="2026-02-03 10:30:36.534704536 +0000 UTC m=+1706.690680665" watchObservedRunningTime="2026-02-03 10:30:36.542028253 +0000 UTC m=+1706.698004382" Feb 03 10:30:37 crc kubenswrapper[5010]: I0203 10:30:37.503005 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:30:37 crc kubenswrapper[5010]: E0203 10:30:37.503727 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:30:51 crc kubenswrapper[5010]: I0203 10:30:51.503116 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:30:51 crc kubenswrapper[5010]: E0203 10:30:51.504152 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:31:02 crc kubenswrapper[5010]: I0203 10:31:02.503281 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:31:02 crc kubenswrapper[5010]: E0203 10:31:02.504099 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:31:14 crc kubenswrapper[5010]: I0203 10:31:14.502820 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:31:14 crc kubenswrapper[5010]: E0203 10:31:14.504263 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:31:19 crc kubenswrapper[5010]: I0203 10:31:19.851304 5010 scope.go:117] "RemoveContainer" containerID="284a769b3c25b0cdea9e5ddf661cc8aed190c024694193ebf7516c57518d0765" Feb 03 10:31:29 crc kubenswrapper[5010]: I0203 10:31:29.501967 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:31:29 crc kubenswrapper[5010]: E0203 10:31:29.502878 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:31:41 crc kubenswrapper[5010]: I0203 10:31:41.502771 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:31:41 crc kubenswrapper[5010]: E0203 10:31:41.503490 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:31:53 crc kubenswrapper[5010]: I0203 10:31:53.502308 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:31:53 crc kubenswrapper[5010]: E0203 10:31:53.503826 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:32:08 crc kubenswrapper[5010]: I0203 10:32:08.503319 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:32:08 crc kubenswrapper[5010]: E0203 10:32:08.504106 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:32:19 crc kubenswrapper[5010]: I0203 10:32:19.981589 5010 scope.go:117] "RemoveContainer" containerID="204ff7b5906df6362a9178ddb04b60b73173622cbd63d2c7b2264912f116e282" Feb 03 10:32:20 crc kubenswrapper[5010]: I0203 10:32:20.056869 5010 scope.go:117] "RemoveContainer" containerID="4198ce459a693b38bf47283f126a3f929ce83d42492541b2b961db5cda2701f4" Feb 03 10:32:20 crc kubenswrapper[5010]: I0203 10:32:20.103324 5010 scope.go:117] "RemoveContainer" containerID="1bd8603024a229914190fc469345835e8b37de52fd7f1951f53bc0059a29de92" Feb 03 10:32:20 crc kubenswrapper[5010]: I0203 10:32:20.127711 5010 scope.go:117] "RemoveContainer" containerID="67d6ea389313e14d97c8b6c045808e3c44adad70ca29d47d5585704fabd03630" Feb 03 10:32:20 crc kubenswrapper[5010]: I0203 10:32:20.509817 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:32:20 crc kubenswrapper[5010]: E0203 10:32:20.510189 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:32:31 crc kubenswrapper[5010]: I0203 10:32:31.502394 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:32:31 crc kubenswrapper[5010]: E0203 10:32:31.503045 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:32:43 crc kubenswrapper[5010]: I0203 10:32:43.502466 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:32:43 crc kubenswrapper[5010]: E0203 10:32:43.503265 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:32:57 crc kubenswrapper[5010]: I0203 10:32:57.502477 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:32:57 crc kubenswrapper[5010]: E0203 10:32:57.503648 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:33:05 crc 
kubenswrapper[5010]: I0203 10:33:05.119929 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pfd"] Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.126024 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.135635 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pfd"] Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.232693 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-utilities\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.232837 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-989jz\" (UniqueName: \"kubernetes.io/projected/f2d67207-8c20-4786-abde-621b94eada73-kube-api-access-989jz\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.233272 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-catalog-content\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.309619 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k5k8q"] Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.312459 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.336503 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-utilities\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.336591 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-989jz\" (UniqueName: \"kubernetes.io/projected/f2d67207-8c20-4786-abde-621b94eada73-kube-api-access-989jz\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.336683 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-catalog-content\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.337890 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5k8q"] Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.340096 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-catalog-content\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.343110 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-utilities\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.378175 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-989jz\" (UniqueName: \"kubernetes.io/projected/f2d67207-8c20-4786-abde-621b94eada73-kube-api-access-989jz\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.440186 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-utilities\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.440521 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzvf\" (UniqueName: \"kubernetes.io/projected/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-kube-api-access-sjzvf\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.440972 5010 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-catalog-content\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.477524 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.543816 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjzvf\" (UniqueName: \"kubernetes.io/projected/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-kube-api-access-sjzvf\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.544018 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-catalog-content\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.544094 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-utilities\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.545080 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-utilities\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.545156 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-catalog-content\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.572762 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjzvf\" (UniqueName: \"kubernetes.io/projected/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-kube-api-access-sjzvf\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.644350 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:06 crc kubenswrapper[5010]: I0203 10:33:06.401141 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5k8q"] Feb 03 10:33:06 crc kubenswrapper[5010]: I0203 10:33:06.451547 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pfd"] Feb 03 10:33:07 crc kubenswrapper[5010]: I0203 10:33:07.441544 5010 generic.go:334] "Generic (PLEG): container finished" podID="f2d67207-8c20-4786-abde-621b94eada73" containerID="04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3" exitCode=0 Feb 03 10:33:07 crc kubenswrapper[5010]: I0203 10:33:07.441747 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pfd" event={"ID":"f2d67207-8c20-4786-abde-621b94eada73","Type":"ContainerDied","Data":"04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3"} Feb 03 10:33:07 crc kubenswrapper[5010]: I0203 10:33:07.442722 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pfd" event={"ID":"f2d67207-8c20-4786-abde-621b94eada73","Type":"ContainerStarted","Data":"8b3b270bebd4977e84cdc37b71c34f9d391c7521c5a7a8426582efd5470a62cc"} Feb 03 10:33:07 crc kubenswrapper[5010]: I0203 10:33:07.445530 5010 generic.go:334] "Generic (PLEG): container finished" podID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerID="7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5" exitCode=0 Feb 03 10:33:07 crc kubenswrapper[5010]: I0203 10:33:07.445598 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5k8q" event={"ID":"07b694c8-ca4a-4c06-9a6a-786e7f8501fc","Type":"ContainerDied","Data":"7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5"} Feb 03 10:33:07 crc kubenswrapper[5010]: I0203 10:33:07.445638 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5k8q" event={"ID":"07b694c8-ca4a-4c06-9a6a-786e7f8501fc","Type":"ContainerStarted","Data":"5b727abf7e342cd4d1d4e63479302a3e7250e0f31e5c2175523f9baf9010f5bf"} Feb 03 10:33:08 crc kubenswrapper[5010]: I0203 10:33:08.465195 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pfd" event={"ID":"f2d67207-8c20-4786-abde-621b94eada73","Type":"ContainerStarted","Data":"d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697"} Feb 03 10:33:08 crc kubenswrapper[5010]: I0203 10:33:08.469721 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5k8q" event={"ID":"07b694c8-ca4a-4c06-9a6a-786e7f8501fc","Type":"ContainerStarted","Data":"7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e"} Feb 03 10:33:09 crc kubenswrapper[5010]: I0203 10:33:09.480913 5010 generic.go:334] "Generic (PLEG): container finished" podID="f2d67207-8c20-4786-abde-621b94eada73" containerID="d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697" exitCode=0 Feb 03 10:33:09 crc kubenswrapper[5010]: I0203 10:33:09.481024 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pfd" event={"ID":"f2d67207-8c20-4786-abde-621b94eada73","Type":"ContainerDied","Data":"d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697"} Feb 03 10:33:09 crc kubenswrapper[5010]: I0203 10:33:09.484188 5010 generic.go:334] "Generic (PLEG): 
container finished" podID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerID="7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e" exitCode=0 Feb 03 10:33:09 crc kubenswrapper[5010]: I0203 10:33:09.484247 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5k8q" event={"ID":"07b694c8-ca4a-4c06-9a6a-786e7f8501fc","Type":"ContainerDied","Data":"7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e"} Feb 03 10:33:09 crc kubenswrapper[5010]: I0203 10:33:09.503141 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:33:09 crc kubenswrapper[5010]: E0203 10:33:09.503709 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:33:11 crc kubenswrapper[5010]: I0203 10:33:11.515508 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pfd" event={"ID":"f2d67207-8c20-4786-abde-621b94eada73","Type":"ContainerStarted","Data":"13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938"} Feb 03 10:33:11 crc kubenswrapper[5010]: I0203 10:33:11.518638 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5k8q" event={"ID":"07b694c8-ca4a-4c06-9a6a-786e7f8501fc","Type":"ContainerStarted","Data":"10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb"} Feb 03 10:33:11 crc kubenswrapper[5010]: I0203 10:33:11.550707 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n5pfd" podStartSLOduration=3.593418332 podStartE2EDuration="6.55065389s" podCreationTimestamp="2026-02-03 10:33:05 +0000 UTC" firstStartedPulling="2026-02-03 10:33:07.445794012 +0000 UTC m=+1857.601770141" lastFinishedPulling="2026-02-03 10:33:10.40302957 +0000 UTC m=+1860.559005699" observedRunningTime="2026-02-03 10:33:11.544119643 +0000 UTC m=+1861.700095772" watchObservedRunningTime="2026-02-03 10:33:11.55065389 +0000 UTC m=+1861.706630019" Feb 03 10:33:11 crc kubenswrapper[5010]: I0203 10:33:11.580911 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k5k8q" podStartSLOduration=3.521801696 podStartE2EDuration="6.580890736s" podCreationTimestamp="2026-02-03 10:33:05 +0000 UTC" firstStartedPulling="2026-02-03 10:33:07.448956804 +0000 UTC m=+1857.604932933" lastFinishedPulling="2026-02-03 10:33:10.508045844 +0000 UTC m=+1860.664021973" observedRunningTime="2026-02-03 10:33:11.573584169 +0000 UTC m=+1861.729560308" watchObservedRunningTime="2026-02-03 10:33:11.580890736 +0000 UTC m=+1861.736866865" Feb 03 10:33:15 crc kubenswrapper[5010]: I0203 10:33:15.478123 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:15 crc kubenswrapper[5010]: I0203 10:33:15.479065 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:15 crc kubenswrapper[5010]: I0203 10:33:15.536345 5010 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:15 crc kubenswrapper[5010]: I0203 10:33:15.651656 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:15 crc kubenswrapper[5010]: I0203 10:33:15.659890 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:15 crc kubenswrapper[5010]: I0203 10:33:15.661818 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:15 crc kubenswrapper[5010]: I0203 10:33:15.723298 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:16 crc kubenswrapper[5010]: I0203 10:33:16.630680 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:17 crc kubenswrapper[5010]: I0203 10:33:17.898272 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pfd"] Feb 03 10:33:17 crc kubenswrapper[5010]: I0203 10:33:17.898833 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n5pfd" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="registry-server" containerID="cri-o://13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938" gracePeriod=2 Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.094050 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5k8q"] Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.370123 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.409362 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-utilities\") pod \"f2d67207-8c20-4786-abde-621b94eada73\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.409539 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-989jz\" (UniqueName: \"kubernetes.io/projected/f2d67207-8c20-4786-abde-621b94eada73-kube-api-access-989jz\") pod \"f2d67207-8c20-4786-abde-621b94eada73\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.409593 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-catalog-content\") pod \"f2d67207-8c20-4786-abde-621b94eada73\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.412060 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-utilities" (OuterVolumeSpecName: "utilities") pod "f2d67207-8c20-4786-abde-621b94eada73" (UID: "f2d67207-8c20-4786-abde-621b94eada73"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.424422 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d67207-8c20-4786-abde-621b94eada73-kube-api-access-989jz" (OuterVolumeSpecName: "kube-api-access-989jz") pod "f2d67207-8c20-4786-abde-621b94eada73" (UID: "f2d67207-8c20-4786-abde-621b94eada73"). InnerVolumeSpecName "kube-api-access-989jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.437563 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2d67207-8c20-4786-abde-621b94eada73" (UID: "f2d67207-8c20-4786-abde-621b94eada73"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.511818 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.511851 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-989jz\" (UniqueName: \"kubernetes.io/projected/f2d67207-8c20-4786-abde-621b94eada73-kube-api-access-989jz\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.511864 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.608504 5010 generic.go:334] "Generic (PLEG): container finished" podID="f2d67207-8c20-4786-abde-621b94eada73" containerID="13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938" exitCode=0 Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.608605 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.608608 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pfd" event={"ID":"f2d67207-8c20-4786-abde-621b94eada73","Type":"ContainerDied","Data":"13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938"} Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.608672 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pfd" event={"ID":"f2d67207-8c20-4786-abde-621b94eada73","Type":"ContainerDied","Data":"8b3b270bebd4977e84cdc37b71c34f9d391c7521c5a7a8426582efd5470a62cc"} Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.608698 5010 scope.go:117] "RemoveContainer" containerID="13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.640244 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pfd"] Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.649473 5010 scope.go:117] "RemoveContainer" containerID="d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.650577 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pfd"] Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.679010 5010 scope.go:117] "RemoveContainer" containerID="04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.731828 5010 scope.go:117] "RemoveContainer" containerID="13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938" Feb 03 10:33:18 crc kubenswrapper[5010]: E0203 10:33:18.733165 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938\": container with ID starting with 13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938 not found: ID does not exist" containerID="13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.733258 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938"} err="failed to get container status \"13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938\": rpc error: code = NotFound desc = could not find container \"13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938\": container with ID starting with 13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938 not found: ID does not exist" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.733302 5010 scope.go:117] "RemoveContainer" containerID="d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697" Feb 03 10:33:18 crc kubenswrapper[5010]: E0203 10:33:18.734287 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697\": container with ID starting with d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697 not found: ID does not exist" containerID="d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.734441 5010 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697"} err="failed to get container status \"d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697\": rpc error: code = NotFound desc = could not find container \"d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697\": container with ID starting with d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697 not found: ID does not exist" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.734497 5010 scope.go:117] "RemoveContainer" containerID="04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3" Feb 03 10:33:18 crc kubenswrapper[5010]: E0203 10:33:18.735039 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3\": container with ID starting with 04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3 not found: ID does not exist" containerID="04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.735082 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3"} err="failed to get container status \"04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3\": rpc error: code = NotFound desc = could not find container \"04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3\": container with ID starting with 04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3 not found: ID does not exist" Feb 03 10:33:19 crc kubenswrapper[5010]: I0203 10:33:19.622426 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k5k8q" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="registry-server" containerID="cri-o://10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb" gracePeriod=2 Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.183805 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.290832 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-catalog-content\") pod \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.290938 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-utilities\") pod \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.290982 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjzvf\" (UniqueName: \"kubernetes.io/projected/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-kube-api-access-sjzvf\") pod \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.291950 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-utilities" (OuterVolumeSpecName: "utilities") pod "07b694c8-ca4a-4c06-9a6a-786e7f8501fc" (UID: "07b694c8-ca4a-4c06-9a6a-786e7f8501fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.298389 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-kube-api-access-sjzvf" (OuterVolumeSpecName: "kube-api-access-sjzvf") pod "07b694c8-ca4a-4c06-9a6a-786e7f8501fc" (UID: "07b694c8-ca4a-4c06-9a6a-786e7f8501fc"). InnerVolumeSpecName "kube-api-access-sjzvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.350909 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07b694c8-ca4a-4c06-9a6a-786e7f8501fc" (UID: "07b694c8-ca4a-4c06-9a6a-786e7f8501fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.393352 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.393399 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjzvf\" (UniqueName: \"kubernetes.io/projected/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-kube-api-access-sjzvf\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.393415 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.519802 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2d67207-8c20-4786-abde-621b94eada73" path="/var/lib/kubelet/pods/f2d67207-8c20-4786-abde-621b94eada73/volumes" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.637925 5010 generic.go:334] "Generic (PLEG): container finished" podID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerID="10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb" exitCode=0 Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.638040 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.638040 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5k8q" event={"ID":"07b694c8-ca4a-4c06-9a6a-786e7f8501fc","Type":"ContainerDied","Data":"10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb"} Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.638091 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5k8q" event={"ID":"07b694c8-ca4a-4c06-9a6a-786e7f8501fc","Type":"ContainerDied","Data":"5b727abf7e342cd4d1d4e63479302a3e7250e0f31e5c2175523f9baf9010f5bf"} Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.638115 5010 scope.go:117] "RemoveContainer" containerID="10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.666720 5010 scope.go:117] "RemoveContainer" containerID="7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.684012 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5k8q"] Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.712069 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k5k8q"] Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.724654 5010 scope.go:117] "RemoveContainer" containerID="7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.766979 5010 scope.go:117] "RemoveContainer" containerID="10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb" Feb 03 10:33:20 crc kubenswrapper[5010]: E0203 10:33:20.767742 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb\": container with ID 
starting with 10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb not found: ID does not exist" containerID="10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.767956 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb"} err="failed to get container status \"10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb\": rpc error: code = NotFound desc = could not find container \"10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb\": container with ID starting with 10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb not found: ID does not exist" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.768092 5010 scope.go:117] "RemoveContainer" containerID="7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e" Feb 03 10:33:20 crc kubenswrapper[5010]: E0203 10:33:20.768704 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e\": container with ID starting with 7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e not found: ID does not exist" containerID="7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.768740 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e"} err="failed to get container status \"7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e\": rpc error: code = NotFound desc = could not find container \"7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e\": container with ID starting with 7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e not found: ID does not exist" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.768767 5010 scope.go:117] "RemoveContainer" containerID="7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5" Feb 03 10:33:20 crc kubenswrapper[5010]: E0203 10:33:20.769074 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5\": container with ID starting with 7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5 not found: ID does not exist" containerID="7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.769121 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5"} err="failed to get container status \"7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5\": rpc error: code = NotFound desc = could not find container \"7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5\": container with ID starting with 7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5 not found: ID does not exist" Feb 03 10:33:21 crc kubenswrapper[5010]: I0203 10:33:21.503259 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:33:22 crc kubenswrapper[5010]: I0203 10:33:22.516011 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod 
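Once every volume of a deleted pod is unmounted and the API object is gone, housekeeping removes the now-empty per-pod volumes directory, which is what the "Cleaned up orphaned pod volumes dir" lines record. A sketch of that cleanup under the path layout shown in the log; the helper itself is a simplified assumption, not the kubelet's real code:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cleanupOrphanedPod removes /var/lib/kubelet/pods/<UID>/volumes once
    // no volumes remain mounted inside it.
    func cleanupOrphanedPod(root, podUID string) error {
        dir := filepath.Join(root, "pods", podUID, "volumes")
        entries, err := os.ReadDir(dir)
        if err != nil {
            if os.IsNotExist(err) {
                return nil // already cleaned up
            }
            return err
        }
        if len(entries) > 0 {
            // Unmount must finish first; retried on the next housekeeping pass.
            return fmt.Errorf("pod %q still has %d volumes", podUID, len(entries))
        }
        fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", podUID, dir)
        return os.Remove(dir)
    }

    func main() {
        _ = cleanupOrphanedPod("/var/lib/kubelet", "f2d67207-8c20-4786-abde-621b94eada73")
    }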
volumes dir" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" path="/var/lib/kubelet/pods/07b694c8-ca4a-4c06-9a6a-786e7f8501fc/volumes" Feb 03 10:33:22 crc kubenswrapper[5010]: I0203 10:33:22.667567 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"5dc093ef0ed9c15b3f47adc87cdb7004279d6322628d13c278c955d2873bd2f0"} Feb 03 10:33:25 crc kubenswrapper[5010]: I0203 10:33:25.701673 5010 generic.go:334] "Generic (PLEG): container finished" podID="2d389772-7902-4aca-8bc3-03a0708fbaa2" containerID="1c3d5f240ee62be6fa51825a10963f07b9c3d37c85ce03fca5f277444b1d0397" exitCode=0 Feb 03 10:33:25 crc kubenswrapper[5010]: I0203 10:33:25.701765 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" event={"ID":"2d389772-7902-4aca-8bc3-03a0708fbaa2","Type":"ContainerDied","Data":"1c3d5f240ee62be6fa51825a10963f07b9c3d37c85ce03fca5f277444b1d0397"} Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.252611 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.374972 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-ssh-key-openstack-edpm-ipam\") pod \"2d389772-7902-4aca-8bc3-03a0708fbaa2\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.375564 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-bootstrap-combined-ca-bundle\") pod \"2d389772-7902-4aca-8bc3-03a0708fbaa2\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.375842 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-inventory\") pod \"2d389772-7902-4aca-8bc3-03a0708fbaa2\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.376008 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmtk2\" (UniqueName: \"kubernetes.io/projected/2d389772-7902-4aca-8bc3-03a0708fbaa2-kube-api-access-jmtk2\") pod \"2d389772-7902-4aca-8bc3-03a0708fbaa2\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.398704 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d389772-7902-4aca-8bc3-03a0708fbaa2-kube-api-access-jmtk2" (OuterVolumeSpecName: "kube-api-access-jmtk2") pod "2d389772-7902-4aca-8bc3-03a0708fbaa2" (UID: "2d389772-7902-4aca-8bc3-03a0708fbaa2"). InnerVolumeSpecName "kube-api-access-jmtk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.401516 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2d389772-7902-4aca-8bc3-03a0708fbaa2" (UID: "2d389772-7902-4aca-8bc3-03a0708fbaa2"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.428452 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-inventory" (OuterVolumeSpecName: "inventory") pod "2d389772-7902-4aca-8bc3-03a0708fbaa2" (UID: "2d389772-7902-4aca-8bc3-03a0708fbaa2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.429024 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2d389772-7902-4aca-8bc3-03a0708fbaa2" (UID: "2d389772-7902-4aca-8bc3-03a0708fbaa2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.479931 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.480002 5010 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.480046 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.480061 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmtk2\" (UniqueName: \"kubernetes.io/projected/2d389772-7902-4aca-8bc3-03a0708fbaa2-kube-api-access-jmtk2\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.728745 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" event={"ID":"2d389772-7902-4aca-8bc3-03a0708fbaa2","Type":"ContainerDied","Data":"2ef65aac28dddf89deb7ce485b857019655fec507cad6ee360424ff04f3a20c1"} Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.729177 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ef65aac28dddf89deb7ce485b857019655fec507cad6ee360424ff04f3a20c1" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.728802 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.850657 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs"] Feb 03 10:33:27 crc kubenswrapper[5010]: E0203 10:33:27.851410 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="extract-utilities" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.851438 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="extract-utilities" Feb 03 10:33:27 crc kubenswrapper[5010]: E0203 10:33:27.851450 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d389772-7902-4aca-8bc3-03a0708fbaa2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.851459 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d389772-7902-4aca-8bc3-03a0708fbaa2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 03 10:33:27 crc kubenswrapper[5010]: E0203 10:33:27.851471 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="extract-utilities" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.851478 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="extract-utilities" Feb 03 10:33:27 crc kubenswrapper[5010]: E0203 10:33:27.851508 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="extract-content" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.851516 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="extract-content" Feb 03 10:33:27 crc kubenswrapper[5010]: E0203 10:33:27.851534 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="registry-server" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.851541 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="registry-server" Feb 03 10:33:27 crc kubenswrapper[5010]: E0203 10:33:27.851557 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="extract-content" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.851564 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="extract-content" Feb 03 10:33:27 crc kubenswrapper[5010]: E0203 10:33:27.851577 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="registry-server" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.851583 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="registry-server" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.869655 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="registry-server" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.869712 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="registry-server" Feb 03 10:33:27 crc 
kubenswrapper[5010]: I0203 10:33:27.869762 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d389772-7902-4aca-8bc3-03a0708fbaa2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.871322 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs"] Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.871474 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.881614 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.883404 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.883602 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.884073 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.997948 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.998008 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.998110 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtznz\" (UniqueName: \"kubernetes.io/projected/96722ef6-9c22-4700-8163-b25503d014bd-kube-api-access-xtznz\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.099951 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtznz\" (UniqueName: \"kubernetes.io/projected/96722ef6-9c22-4700-8163-b25503d014bd-kube-api-access-xtznz\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.100081 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-ssh-key-openstack-edpm-ipam\") pod 
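Before admitting the newly added pod, the CPU and memory managers drop per-container accounting left behind by pods that no longer exist; the E-level cpu_manager lines flag the stale entries and the I-level state_mem lines record the actual deletion. A sketch in the spirit of that RemoveStaleState pass, where the map-based state is an assumption for illustration:

    package main

    import "fmt"

    type containerKey struct{ podUID, container string }

    type cpuState struct{ assignments map[containerKey]string }

    // removeStale drops CPUSet assignments for any pod not in the active set.
    func (s *cpuState) removeStale(active map[string]bool) {
        for k := range s.assignments {
            if !active[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
                delete(s.assignments, k)
                fmt.Printf("Deleted CPUSet assignment podUID=%q containerName=%q\n", k.podUID, k.container)
            }
        }
    }

    func main() {
        s := &cpuState{assignments: map[containerKey]string{
            {"07b694c8-ca4a-4c06-9a6a-786e7f8501fc", "registry-server"}: "0-3",
        }}
        s.removeStale(map[string]bool{}) // no active pods: everything is stale
    }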
\"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.100109 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.109611 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.111750 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.120894 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtznz\" (UniqueName: \"kubernetes.io/projected/96722ef6-9c22-4700-8163-b25503d014bd-kube-api-access-xtznz\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.198874 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.796978 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs"] Feb 03 10:33:29 crc kubenswrapper[5010]: I0203 10:33:29.746760 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" event={"ID":"96722ef6-9c22-4700-8163-b25503d014bd","Type":"ContainerStarted","Data":"fcc55e058fef1ec901480ccc1a34930515b347f1c4dd1ccd9091bdb239759001"} Feb 03 10:33:29 crc kubenswrapper[5010]: I0203 10:33:29.748325 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" event={"ID":"96722ef6-9c22-4700-8163-b25503d014bd","Type":"ContainerStarted","Data":"9581a94b3645ab2ab3a0f1ef5560e2783a192fe6d46b7146f415c304073f83e5"} Feb 03 10:33:29 crc kubenswrapper[5010]: I0203 10:33:29.778648 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" podStartSLOduration=2.280925344 podStartE2EDuration="2.778618586s" podCreationTimestamp="2026-02-03 10:33:27 +0000 UTC" firstStartedPulling="2026-02-03 10:33:28.803182502 +0000 UTC m=+1878.959158631" lastFinishedPulling="2026-02-03 10:33:29.300875744 +0000 UTC m=+1879.456851873" observedRunningTime="2026-02-03 10:33:29.765957411 +0000 UTC m=+1879.921933560" watchObservedRunningTime="2026-02-03 10:33:29.778618586 +0000 UTC m=+1879.934594715" Feb 03 10:33:37 crc kubenswrapper[5010]: I0203 10:33:37.136030 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9qjk8"] Feb 03 10:33:37 crc kubenswrapper[5010]: I0203 10:33:37.148072 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-nh655"] Feb 03 10:33:37 crc kubenswrapper[5010]: I0203 10:33:37.158789 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-nh655"] Feb 03 10:33:37 crc kubenswrapper[5010]: I0203 10:33:37.174430 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-caa6-account-create-update-69sjp"] Feb 03 10:33:37 crc kubenswrapper[5010]: I0203 10:33:37.184385 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9qjk8"] Feb 03 10:33:37 crc kubenswrapper[5010]: I0203 10:33:37.194562 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-caa6-account-create-update-69sjp"] Feb 03 10:33:38 crc kubenswrapper[5010]: I0203 10:33:38.037377 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3037-account-create-update-847d2"] Feb 03 10:33:38 crc kubenswrapper[5010]: I0203 10:33:38.047728 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3037-account-create-update-847d2"] Feb 03 10:33:38 crc kubenswrapper[5010]: I0203 10:33:38.518124 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cf6f6f7-d993-486c-9dcf-63d6b298f898" path="/var/lib/kubelet/pods/7cf6f6f7-d993-486c-9dcf-63d6b298f898/volumes" Feb 03 10:33:38 crc kubenswrapper[5010]: I0203 10:33:38.519261 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a6faff8-cfd9-4253-8dc3-d3df2b3252be" path="/var/lib/kubelet/pods/9a6faff8-cfd9-4253-8dc3-d3df2b3252be/volumes" Feb 03 10:33:38 crc kubenswrapper[5010]: I0203 10:33:38.520232 5010 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e03bfed-c1c6-4165-86c0-6c1415a30081" path="/var/lib/kubelet/pods/9e03bfed-c1c6-4165-86c0-6c1415a30081/volumes" Feb 03 10:33:38 crc kubenswrapper[5010]: I0203 10:33:38.521151 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996" path="/var/lib/kubelet/pods/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996/volumes" Feb 03 10:33:40 crc kubenswrapper[5010]: I0203 10:33:40.049577 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-g8ncl"] Feb 03 10:33:40 crc kubenswrapper[5010]: I0203 10:33:40.062113 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-g8ncl"] Feb 03 10:33:40 crc kubenswrapper[5010]: I0203 10:33:40.517521 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0505d3aa-dab1-4f61-af12-69804ff1345a" path="/var/lib/kubelet/pods/0505d3aa-dab1-4f61-af12-69804ff1345a/volumes" Feb 03 10:33:41 crc kubenswrapper[5010]: I0203 10:33:41.043381 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-06a9-account-create-update-764vb"] Feb 03 10:33:41 crc kubenswrapper[5010]: I0203 10:33:41.056692 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-06a9-account-create-update-764vb"] Feb 03 10:33:42 crc kubenswrapper[5010]: I0203 10:33:42.517753 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d0be64-0307-43ee-9c2c-905f1d22c267" path="/var/lib/kubelet/pods/e2d0be64-0307-43ee-9c2c-905f1d22c267/volumes" Feb 03 10:34:04 crc kubenswrapper[5010]: I0203 10:34:04.046555 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-742kg"] Feb 03 10:34:04 crc kubenswrapper[5010]: I0203 10:34:04.054016 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-742kg"] Feb 03 10:34:04 crc kubenswrapper[5010]: I0203 10:34:04.513842 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0efd6c3-d0dc-4ebc-a116-d7e811177fa6" path="/var/lib/kubelet/pods/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6/volumes" Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.045541 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-z7nxm"] Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.056619 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-f06e-account-create-update-glqr6"] Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.065647 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-54zjm"] Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.076909 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-5fk6k"] Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.086660 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-5fk6k"] Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.100054 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-z7nxm"] Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.112099 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-54zjm"] Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.124376 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-f06e-account-create-update-glqr6"] Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.040385 
5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5102-account-create-update-nv7jr"] Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.053999 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5b83-account-create-update-hrlzs"] Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.068098 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5102-account-create-update-nv7jr"] Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.083401 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5b83-account-create-update-hrlzs"] Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.519009 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c5b7adb-c7e4-4014-b37f-674861868979" path="/var/lib/kubelet/pods/1c5b7adb-c7e4-4014-b37f-674861868979/volumes" Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.520527 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8144e4b8-89a7-4c08-86b9-219ea9d4645c" path="/var/lib/kubelet/pods/8144e4b8-89a7-4c08-86b9-219ea9d4645c/volumes" Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.521438 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83561b9b-ec1d-4ef5-bb05-48780834e40d" path="/var/lib/kubelet/pods/83561b9b-ec1d-4ef5-bb05-48780834e40d/volumes" Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.522536 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90501abd-ab27-4c54-bd38-239e5803689b" path="/var/lib/kubelet/pods/90501abd-ab27-4c54-bd38-239e5803689b/volumes" Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.524512 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c0e1d98-9045-4a70-8021-ac7dcf843775" path="/var/lib/kubelet/pods/9c0e1d98-9045-4a70-8021-ac7dcf843775/volumes" Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.525677 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fce7685e-8301-4c02-8e1b-386646d84264" path="/var/lib/kubelet/pods/fce7685e-8301-4c02-8e1b-386646d84264/volumes" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.237177 5010 scope.go:117] "RemoveContainer" containerID="ecc37d219487243243570207ff635b3c963683b6d23c8e89c6a83dba41ce9ef2" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.277801 5010 scope.go:117] "RemoveContainer" containerID="867e48e65d90b62aadc6ddb63e004c04adf8450508e9b1413072265967186694" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.332792 5010 scope.go:117] "RemoveContainer" containerID="5fd86f16e791f88f37d27cd6030a471785bd1ebc82355253888f61f74084bc56" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.399891 5010 scope.go:117] "RemoveContainer" containerID="5e4e86c382f25cd8e9bad9e5d4a055df36fab11bdb33c4c29ebe01bd4ab0d270" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.463957 5010 scope.go:117] "RemoveContainer" containerID="7faf76a4eb10f7d724f9bd83b1eb96f06a13d0bd092d0ededd050f56a18268b5" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.515552 5010 scope.go:117] "RemoveContainer" containerID="783df9142821b00a27f64292c3e26d0dec1e72fe32175024883cc3eb71e60b8b" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.568988 5010 scope.go:117] "RemoveContainer" containerID="02a4a1176b9659935ba9d5084dc9f0a979b3bf3765756a868a98c381f2e4df2c" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.610121 5010 scope.go:117] "RemoveContainer" 
containerID="175dd1c77e9a4d7de137280af274a9e26cedb6a12f8e491f927188b800875447" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.639838 5010 scope.go:117] "RemoveContainer" containerID="b8b094bb4a4489910ae853a898b2603c46e5923639a21e30a68a2dca1eee68b8" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.665582 5010 scope.go:117] "RemoveContainer" containerID="6a575e19d1e33cee77eb78ea1b934b59f477f565a39712db7cebceb61e00a60f" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.693829 5010 scope.go:117] "RemoveContainer" containerID="e98e811059a9c2d02f4a30baf36100191798d1770e183f8268ccff78ece3d154" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.721909 5010 scope.go:117] "RemoveContainer" containerID="5168c22750de205db4c3cef2742987a3feeb1460c92bf43dadf92987bcb6f04e" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.747579 5010 scope.go:117] "RemoveContainer" containerID="ea0bf3943fa2c4dbc35b90869ad8099512a31ad225b933cd4437ed8cc1770bf0" Feb 03 10:34:32 crc kubenswrapper[5010]: I0203 10:34:32.060414 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-b8wjx"] Feb 03 10:34:32 crc kubenswrapper[5010]: I0203 10:34:32.069321 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-b8wjx"] Feb 03 10:34:32 crc kubenswrapper[5010]: I0203 10:34:32.519629 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a81f0078-44e5-4bbc-82ce-3d648e2e32db" path="/var/lib/kubelet/pods/a81f0078-44e5-4bbc-82ce-3d648e2e32db/volumes" Feb 03 10:34:41 crc kubenswrapper[5010]: I0203 10:34:41.039824 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-xlhhb"] Feb 03 10:34:41 crc kubenswrapper[5010]: I0203 10:34:41.053393 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-xlhhb"] Feb 03 10:34:42 crc kubenswrapper[5010]: I0203 10:34:42.519449 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" path="/var/lib/kubelet/pods/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3/volumes" Feb 03 10:35:10 crc kubenswrapper[5010]: I0203 10:35:10.062742 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-mvrf4"] Feb 03 10:35:10 crc kubenswrapper[5010]: I0203 10:35:10.087139 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-mvrf4"] Feb 03 10:35:10 crc kubenswrapper[5010]: I0203 10:35:10.520616 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" path="/var/lib/kubelet/pods/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c/volumes" Feb 03 10:35:19 crc kubenswrapper[5010]: I0203 10:35:19.054496 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-tptfc"] Feb 03 10:35:19 crc kubenswrapper[5010]: I0203 10:35:19.067485 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-tptfc"] Feb 03 10:35:20 crc kubenswrapper[5010]: I0203 10:35:20.517376 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ef610c-3c09-4b27-9b97-3a5350388caa" path="/var/lib/kubelet/pods/29ef610c-3c09-4b27-9b97-3a5350388caa/volumes" Feb 03 10:35:21 crc kubenswrapper[5010]: I0203 10:35:21.087362 5010 scope.go:117] "RemoveContainer" containerID="3e8d95734ac813f12b8b00d5738e5d5d21869fee2e05c53312641bbb6e639906" Feb 03 10:35:21 crc kubenswrapper[5010]: I0203 10:35:21.160444 5010 scope.go:117] "RemoveContainer" 
containerID="c2c236cbcbee82d440a00402bffa84360077e085e5045869a24060dbc0c3411c" Feb 03 10:35:21 crc kubenswrapper[5010]: I0203 10:35:21.226832 5010 scope.go:117] "RemoveContainer" containerID="9f5dffa42b9c5fba57b57a1ca0e358ff317d50df295683f9bc9e42abb84b1b81" Feb 03 10:35:21 crc kubenswrapper[5010]: I0203 10:35:21.269084 5010 scope.go:117] "RemoveContainer" containerID="2f477c6764bb977e8cc3e17e43a92a85fa737e9bdd4ffa07901f030c855e03b4" Feb 03 10:35:22 crc kubenswrapper[5010]: I0203 10:35:22.061446 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-swx9t"] Feb 03 10:35:22 crc kubenswrapper[5010]: I0203 10:35:22.071525 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-swx9t"] Feb 03 10:35:22 crc kubenswrapper[5010]: I0203 10:35:22.520401 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="457510b3-7c5a-456d-9df3-54fa7dee8c4b" path="/var/lib/kubelet/pods/457510b3-7c5a-456d-9df3-54fa7dee8c4b/volumes" Feb 03 10:35:23 crc kubenswrapper[5010]: I0203 10:35:23.102068 5010 generic.go:334] "Generic (PLEG): container finished" podID="96722ef6-9c22-4700-8163-b25503d014bd" containerID="fcc55e058fef1ec901480ccc1a34930515b347f1c4dd1ccd9091bdb239759001" exitCode=0 Feb 03 10:35:23 crc kubenswrapper[5010]: I0203 10:35:23.102140 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" event={"ID":"96722ef6-9c22-4700-8163-b25503d014bd","Type":"ContainerDied","Data":"fcc55e058fef1ec901480ccc1a34930515b347f1c4dd1ccd9091bdb239759001"} Feb 03 10:35:24 crc kubenswrapper[5010]: I0203 10:35:24.772632 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:35:24 crc kubenswrapper[5010]: I0203 10:35:24.907374 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-ssh-key-openstack-edpm-ipam\") pod \"96722ef6-9c22-4700-8163-b25503d014bd\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " Feb 03 10:35:24 crc kubenswrapper[5010]: I0203 10:35:24.907730 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-inventory\") pod \"96722ef6-9c22-4700-8163-b25503d014bd\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " Feb 03 10:35:24 crc kubenswrapper[5010]: I0203 10:35:24.907826 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtznz\" (UniqueName: \"kubernetes.io/projected/96722ef6-9c22-4700-8163-b25503d014bd-kube-api-access-xtznz\") pod \"96722ef6-9c22-4700-8163-b25503d014bd\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " Feb 03 10:35:24 crc kubenswrapper[5010]: I0203 10:35:24.918977 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96722ef6-9c22-4700-8163-b25503d014bd-kube-api-access-xtznz" (OuterVolumeSpecName: "kube-api-access-xtznz") pod "96722ef6-9c22-4700-8163-b25503d014bd" (UID: "96722ef6-9c22-4700-8163-b25503d014bd"). InnerVolumeSpecName "kube-api-access-xtznz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:35:24 crc kubenswrapper[5010]: I0203 10:35:24.946555 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96722ef6-9c22-4700-8163-b25503d014bd" (UID: "96722ef6-9c22-4700-8163-b25503d014bd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:35:24 crc kubenswrapper[5010]: I0203 10:35:24.948383 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-inventory" (OuterVolumeSpecName: "inventory") pod "96722ef6-9c22-4700-8163-b25503d014bd" (UID: "96722ef6-9c22-4700-8163-b25503d014bd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.012085 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.012184 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.012199 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtznz\" (UniqueName: \"kubernetes.io/projected/96722ef6-9c22-4700-8163-b25503d014bd-kube-api-access-xtznz\") on node \"crc\" DevicePath \"\"" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.133501 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" event={"ID":"96722ef6-9c22-4700-8163-b25503d014bd","Type":"ContainerDied","Data":"9581a94b3645ab2ab3a0f1ef5560e2783a192fe6d46b7146f415c304073f83e5"} Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.133558 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9581a94b3645ab2ab3a0f1ef5560e2783a192fe6d46b7146f415c304073f83e5" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.133633 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.238032 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc"] Feb 03 10:35:25 crc kubenswrapper[5010]: E0203 10:35:25.238834 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96722ef6-9c22-4700-8163-b25503d014bd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.238867 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="96722ef6-9c22-4700-8163-b25503d014bd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.239142 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="96722ef6-9c22-4700-8163-b25503d014bd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.240287 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.243772 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.244184 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.247422 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.248997 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.252656 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc"] Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.420722 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.420796 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.421184 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd7qc\" (UniqueName: \"kubernetes.io/projected/efb76028-3500-476c-adef-dfc87d2cdab7-kube-api-access-kd7qc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.523355 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd7qc\" (UniqueName: \"kubernetes.io/projected/efb76028-3500-476c-adef-dfc87d2cdab7-kube-api-access-kd7qc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.523593 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.523647 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.528667 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.535297 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.550583 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd7qc\" (UniqueName: \"kubernetes.io/projected/efb76028-3500-476c-adef-dfc87d2cdab7-kube-api-access-kd7qc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.563559 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:26 crc kubenswrapper[5010]: I0203 10:35:26.159186 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc"] Feb 03 10:35:27 crc kubenswrapper[5010]: I0203 10:35:27.160445 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" event={"ID":"efb76028-3500-476c-adef-dfc87d2cdab7","Type":"ContainerStarted","Data":"a4c375690fa1ec40eef647be11edc8538fbedd2b8d427496a33c1527d4387b78"} Feb 03 10:35:28 crc kubenswrapper[5010]: I0203 10:35:28.176686 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" event={"ID":"efb76028-3500-476c-adef-dfc87d2cdab7","Type":"ContainerStarted","Data":"a19b497c7c28c9ee6e75c3ef4fc8cf01ad5e203dac29a52316b01db981be31af"} Feb 03 10:35:28 crc kubenswrapper[5010]: I0203 10:35:28.210360 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" podStartSLOduration=1.597371485 podStartE2EDuration="3.210330998s" podCreationTimestamp="2026-02-03 10:35:25 +0000 UTC" firstStartedPulling="2026-02-03 10:35:26.163392155 +0000 UTC m=+1996.319368284" lastFinishedPulling="2026-02-03 10:35:27.776351668 +0000 UTC m=+1997.932327797" observedRunningTime="2026-02-03 10:35:28.205975106 +0000 UTC m=+1998.361951235" watchObservedRunningTime="2026-02-03 10:35:28.210330998 +0000 UTC m=+1998.366307127" Feb 03 10:35:31 crc kubenswrapper[5010]: I0203 10:35:31.049113 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-g6tdx"] Feb 03 10:35:31 crc kubenswrapper[5010]: I0203 10:35:31.056976 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-g6tdx"] Feb 03 10:35:32 crc kubenswrapper[5010]: I0203 10:35:32.517622 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bad34e68-b20a-486c-b06b-e19f5aaaf917" path="/var/lib/kubelet/pods/bad34e68-b20a-486c-b06b-e19f5aaaf917/volumes" Feb 03 10:35:39 crc kubenswrapper[5010]: I0203 10:35:39.036003 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-b9wwp"] Feb 03 10:35:39 crc kubenswrapper[5010]: I0203 10:35:39.048817 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-b9wwp"] Feb 03 10:35:40 crc kubenswrapper[5010]: I0203 10:35:40.519759 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1acc33e7-f3ae-4131-a003-aa6b592269c6" path="/var/lib/kubelet/pods/1acc33e7-f3ae-4131-a003-aa6b592269c6/volumes" Feb 03 10:35:46 crc kubenswrapper[5010]: I0203 10:35:46.390644 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:35:46 crc kubenswrapper[5010]: I0203 10:35:46.392024 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:36:05 crc 
Feb 03 10:36:05 crc kubenswrapper[5010]: I0203 10:36:05.999053 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jqtw"
Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.013568 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jqtw"]
Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.086691 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-utilities\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw"
Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.086796 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlnlr\" (UniqueName: \"kubernetes.io/projected/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-kube-api-access-jlnlr\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw"
Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.086967 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-catalog-content\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw"
Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.190496 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-utilities\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw"
Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.190620 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlnlr\" (UniqueName: \"kubernetes.io/projected/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-kube-api-access-jlnlr\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw"
Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.190765 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-catalog-content\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw"
Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.191277 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-utilities\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw"
Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.191542 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-catalog-content\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw"
Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.218407 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlnlr\" (UniqueName: \"kubernetes.io/projected/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-kube-api-access-jlnlr\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw"
Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.327313 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jqtw"
Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.980774 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jqtw"]
Feb 03 10:36:07 crc kubenswrapper[5010]: I0203 10:36:07.855310 5010 generic.go:334] "Generic (PLEG): container finished" podID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerID="268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4" exitCode=0
Feb 03 10:36:07 crc kubenswrapper[5010]: I0203 10:36:07.855394 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jqtw" event={"ID":"de98348e-d7aa-4a70-ba6f-8fbe414be6e4","Type":"ContainerDied","Data":"268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4"}
Feb 03 10:36:07 crc kubenswrapper[5010]: I0203 10:36:07.855439 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jqtw" event={"ID":"de98348e-d7aa-4a70-ba6f-8fbe414be6e4","Type":"ContainerStarted","Data":"30462e26e895913aeae7a24f7294d049662d3489ceed1084bbd282871696eac4"}
Feb 03 10:36:07 crc kubenswrapper[5010]: I0203 10:36:07.858984 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 10:36:09 crc kubenswrapper[5010]: I0203 10:36:09.880976 5010 generic.go:334] "Generic (PLEG): container finished" podID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerID="9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0" exitCode=0
Feb 03 10:36:09 crc kubenswrapper[5010]: I0203 10:36:09.881078 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jqtw" event={"ID":"de98348e-d7aa-4a70-ba6f-8fbe414be6e4","Type":"ContainerDied","Data":"9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0"}
Feb 03 10:36:10 crc kubenswrapper[5010]: I0203 10:36:10.896034 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jqtw" event={"ID":"de98348e-d7aa-4a70-ba6f-8fbe414be6e4","Type":"ContainerStarted","Data":"fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952"}
Feb 03 10:36:10 crc kubenswrapper[5010]: I0203 10:36:10.924246 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9jqtw" podStartSLOduration=3.433578863 podStartE2EDuration="5.924190314s" podCreationTimestamp="2026-02-03 10:36:05 +0000 UTC" firstStartedPulling="2026-02-03 10:36:07.858563116 +0000 UTC m=+2038.014539245" lastFinishedPulling="2026-02-03 10:36:10.349174567 +0000 UTC m=+2040.505150696" observedRunningTime="2026-02-03 10:36:10.915981753 +0000 UTC m=+2041.071957892" watchObservedRunningTime="2026-02-03 10:36:10.924190314 +0000 UTC m=+2041.080166453"
10:36:10.924190314 +0000 UTC m=+2041.080166453" Feb 03 10:36:16 crc kubenswrapper[5010]: I0203 10:36:16.329006 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:16 crc kubenswrapper[5010]: I0203 10:36:16.329911 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:16 crc kubenswrapper[5010]: I0203 10:36:16.390283 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:16 crc kubenswrapper[5010]: I0203 10:36:16.390554 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:36:16 crc kubenswrapper[5010]: I0203 10:36:16.390608 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:36:16 crc kubenswrapper[5010]: I0203 10:36:16.808791 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:16 crc kubenswrapper[5010]: I0203 10:36:16.867867 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9jqtw"] Feb 03 10:36:18 crc kubenswrapper[5010]: I0203 10:36:18.813852 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9jqtw" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="registry-server" containerID="cri-o://fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952" gracePeriod=2 Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.358339 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.521053 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-utilities\") pod \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.521292 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-catalog-content\") pod \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.521431 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlnlr\" (UniqueName: \"kubernetes.io/projected/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-kube-api-access-jlnlr\") pod \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.522562 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-utilities" (OuterVolumeSpecName: "utilities") pod "de98348e-d7aa-4a70-ba6f-8fbe414be6e4" (UID: "de98348e-d7aa-4a70-ba6f-8fbe414be6e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.537693 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-kube-api-access-jlnlr" (OuterVolumeSpecName: "kube-api-access-jlnlr") pod "de98348e-d7aa-4a70-ba6f-8fbe414be6e4" (UID: "de98348e-d7aa-4a70-ba6f-8fbe414be6e4"). InnerVolumeSpecName "kube-api-access-jlnlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.584205 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de98348e-d7aa-4a70-ba6f-8fbe414be6e4" (UID: "de98348e-d7aa-4a70-ba6f-8fbe414be6e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.624475 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.624529 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.624545 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlnlr\" (UniqueName: \"kubernetes.io/projected/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-kube-api-access-jlnlr\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.828530 5010 generic.go:334] "Generic (PLEG): container finished" podID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerID="fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952" exitCode=0 Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.828601 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jqtw" event={"ID":"de98348e-d7aa-4a70-ba6f-8fbe414be6e4","Type":"ContainerDied","Data":"fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952"} Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.828619 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.828642 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jqtw" event={"ID":"de98348e-d7aa-4a70-ba6f-8fbe414be6e4","Type":"ContainerDied","Data":"30462e26e895913aeae7a24f7294d049662d3489ceed1084bbd282871696eac4"} Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.828666 5010 scope.go:117] "RemoveContainer" containerID="fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.873201 5010 scope.go:117] "RemoveContainer" containerID="9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.878897 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9jqtw"] Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.889362 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9jqtw"] Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.902122 5010 scope.go:117] "RemoveContainer" containerID="268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.962899 5010 scope.go:117] "RemoveContainer" containerID="fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952" Feb 03 10:36:19 crc kubenswrapper[5010]: E0203 10:36:19.964479 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952\": container with ID starting with fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952 not found: ID does not exist" containerID="fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.964544 
5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952"} err="failed to get container status \"fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952\": rpc error: code = NotFound desc = could not find container \"fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952\": container with ID starting with fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952 not found: ID does not exist" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.964580 5010 scope.go:117] "RemoveContainer" containerID="9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0" Feb 03 10:36:19 crc kubenswrapper[5010]: E0203 10:36:19.965321 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0\": container with ID starting with 9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0 not found: ID does not exist" containerID="9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.965363 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0"} err="failed to get container status \"9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0\": rpc error: code = NotFound desc = could not find container \"9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0\": container with ID starting with 9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0 not found: ID does not exist" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.965389 5010 scope.go:117] "RemoveContainer" containerID="268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4" Feb 03 10:36:19 crc kubenswrapper[5010]: E0203 10:36:19.966053 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4\": container with ID starting with 268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4 not found: ID does not exist" containerID="268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.966087 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4"} err="failed to get container status \"268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4\": rpc error: code = NotFound desc = could not find container \"268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4\": container with ID starting with 268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4 not found: ID does not exist" Feb 03 10:36:20 crc kubenswrapper[5010]: I0203 10:36:20.515136 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" path="/var/lib/kubelet/pods/de98348e-d7aa-4a70-ba6f-8fbe414be6e4/volumes" Feb 03 10:36:21 crc kubenswrapper[5010]: I0203 10:36:21.453172 5010 scope.go:117] "RemoveContainer" containerID="90f279a47e6694b954d6224d0a36d83bb292142a861407bbd952b7ac0f3f1940" Feb 03 10:36:21 crc kubenswrapper[5010]: I0203 10:36:21.523752 5010 scope.go:117] "RemoveContainer" 
containerID="56c4bc07b47d992164c95f2c4bc219b10e3ec8444d085ea923e9fc23515c64b1" Feb 03 10:36:21 crc kubenswrapper[5010]: I0203 10:36:21.569637 5010 scope.go:117] "RemoveContainer" containerID="eec510d597d8f2314ae76e8de6136bb5224447e6e83068a025a8dfed4080a04f" Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.090694 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d58b-account-create-update-p69h5"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.109531 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qnsrk"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.121713 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-46aa-account-create-update-5gs9h"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.132923 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-fztcs"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.144261 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-dq6kw"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.157462 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c6bf-account-create-update-9xrwr"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.170157 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-46aa-account-create-update-5gs9h"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.183252 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qnsrk"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.192172 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-dq6kw"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.203490 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d58b-account-create-update-p69h5"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.213601 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fztcs"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.222656 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c6bf-account-create-update-9xrwr"] Feb 03 10:36:30 crc kubenswrapper[5010]: I0203 10:36:30.534529 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="122231ac-5000-44d7-a524-2df85da0abd4" path="/var/lib/kubelet/pods/122231ac-5000-44d7-a524-2df85da0abd4/volumes" Feb 03 10:36:30 crc kubenswrapper[5010]: I0203 10:36:30.536398 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19aa5f54-6733-454e-a1cf-92ba62fc4068" path="/var/lib/kubelet/pods/19aa5f54-6733-454e-a1cf-92ba62fc4068/volumes" Feb 03 10:36:30 crc kubenswrapper[5010]: I0203 10:36:30.537175 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26fff59b-fc6c-46b2-9cb6-9ad352b4e39c" path="/var/lib/kubelet/pods/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c/volumes" Feb 03 10:36:30 crc kubenswrapper[5010]: I0203 10:36:30.538045 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="307672c5-ae66-4af2-bbbb-1a59c58ee4b2" path="/var/lib/kubelet/pods/307672c5-ae66-4af2-bbbb-1a59c58ee4b2/volumes" Feb 03 10:36:30 crc kubenswrapper[5010]: I0203 10:36:30.541146 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fac5d19-4577-4190-b626-83d0b42fd46d" path="/var/lib/kubelet/pods/6fac5d19-4577-4190-b626-83d0b42fd46d/volumes" Feb 03 
10:36:30 crc kubenswrapper[5010]: I0203 10:36:30.542446 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cab88b93-9009-49d9-8967-dc8f2b9a7244" path="/var/lib/kubelet/pods/cab88b93-9009-49d9-8967-dc8f2b9a7244/volumes" Feb 03 10:36:34 crc kubenswrapper[5010]: I0203 10:36:34.710682 5010 generic.go:334] "Generic (PLEG): container finished" podID="efb76028-3500-476c-adef-dfc87d2cdab7" containerID="a19b497c7c28c9ee6e75c3ef4fc8cf01ad5e203dac29a52316b01db981be31af" exitCode=0 Feb 03 10:36:34 crc kubenswrapper[5010]: I0203 10:36:34.711371 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" event={"ID":"efb76028-3500-476c-adef-dfc87d2cdab7","Type":"ContainerDied","Data":"a19b497c7c28c9ee6e75c3ef4fc8cf01ad5e203dac29a52316b01db981be31af"} Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.735291 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" event={"ID":"efb76028-3500-476c-adef-dfc87d2cdab7","Type":"ContainerDied","Data":"a4c375690fa1ec40eef647be11edc8538fbedd2b8d427496a33c1527d4387b78"} Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.737157 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4c375690fa1ec40eef647be11edc8538fbedd2b8d427496a33c1527d4387b78" Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.845873 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.880203 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-ssh-key-openstack-edpm-ipam\") pod \"efb76028-3500-476c-adef-dfc87d2cdab7\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.880313 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-inventory\") pod \"efb76028-3500-476c-adef-dfc87d2cdab7\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.880467 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd7qc\" (UniqueName: \"kubernetes.io/projected/efb76028-3500-476c-adef-dfc87d2cdab7-kube-api-access-kd7qc\") pod \"efb76028-3500-476c-adef-dfc87d2cdab7\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.901759 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efb76028-3500-476c-adef-dfc87d2cdab7-kube-api-access-kd7qc" (OuterVolumeSpecName: "kube-api-access-kd7qc") pod "efb76028-3500-476c-adef-dfc87d2cdab7" (UID: "efb76028-3500-476c-adef-dfc87d2cdab7"). InnerVolumeSpecName "kube-api-access-kd7qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.913708 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "efb76028-3500-476c-adef-dfc87d2cdab7" (UID: "efb76028-3500-476c-adef-dfc87d2cdab7"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.940923 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-inventory" (OuterVolumeSpecName: "inventory") pod "efb76028-3500-476c-adef-dfc87d2cdab7" (UID: "efb76028-3500-476c-adef-dfc87d2cdab7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.983264 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.983323 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.983339 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd7qc\" (UniqueName: \"kubernetes.io/projected/efb76028-3500-476c-adef-dfc87d2cdab7-kube-api-access-kd7qc\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:37 crc kubenswrapper[5010]: I0203 10:36:37.746475 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.324720 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7"] Feb 03 10:36:38 crc kubenswrapper[5010]: E0203 10:36:38.325820 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="registry-server" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.325917 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="registry-server" Feb 03 10:36:38 crc kubenswrapper[5010]: E0203 10:36:38.326019 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="extract-utilities" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.326076 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="extract-utilities" Feb 03 10:36:38 crc kubenswrapper[5010]: E0203 10:36:38.326134 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="extract-content" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.326184 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="extract-content" Feb 03 10:36:38 crc kubenswrapper[5010]: E0203 10:36:38.326269 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb76028-3500-476c-adef-dfc87d2cdab7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.326336 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb76028-3500-476c-adef-dfc87d2cdab7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.326660 5010 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="efb76028-3500-476c-adef-dfc87d2cdab7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.326765 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="registry-server" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.327850 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.332513 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.332572 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.332829 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.332935 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.356099 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7"] Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.404759 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jlcb\" (UniqueName: \"kubernetes.io/projected/3109739d-69b7-439a-b6c4-a8affbe0af4f-kube-api-access-5jlcb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.404911 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.404951 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.507344 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jlcb\" (UniqueName: \"kubernetes.io/projected/3109739d-69b7-439a-b6c4-a8affbe0af4f-kube-api-access-5jlcb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.507496 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.507546 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.523770 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.523792 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.528661 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jlcb\" (UniqueName: \"kubernetes.io/projected/3109739d-69b7-439a-b6c4-a8affbe0af4f-kube-api-access-5jlcb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.657177 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:39 crc kubenswrapper[5010]: I0203 10:36:39.050396 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7"] Feb 03 10:36:39 crc kubenswrapper[5010]: I0203 10:36:39.892893 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" event={"ID":"3109739d-69b7-439a-b6c4-a8affbe0af4f","Type":"ContainerStarted","Data":"45737c1cb8e9fea582eea7ed2cd21ed4f6a6d67483896231864db2a1599dc0be"} Feb 03 10:36:40 crc kubenswrapper[5010]: I0203 10:36:40.922229 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" event={"ID":"3109739d-69b7-439a-b6c4-a8affbe0af4f","Type":"ContainerStarted","Data":"434b05c94a108a94b87c9d056e86bd10915d2cd379e072c24caeee7d45d989df"} Feb 03 10:36:40 crc kubenswrapper[5010]: I0203 10:36:40.949128 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" podStartSLOduration=2.094395483 podStartE2EDuration="2.949102172s" podCreationTimestamp="2026-02-03 10:36:38 +0000 UTC" firstStartedPulling="2026-02-03 10:36:39.063919946 +0000 UTC m=+2069.219896075" lastFinishedPulling="2026-02-03 10:36:39.918626635 +0000 UTC m=+2070.074602764" observedRunningTime="2026-02-03 10:36:40.94200677 +0000 UTC m=+2071.097982899" watchObservedRunningTime="2026-02-03 10:36:40.949102172 +0000 UTC m=+2071.105078291" Feb 03 10:36:45 crc kubenswrapper[5010]: I0203 10:36:45.975817 5010 generic.go:334] "Generic (PLEG): container finished" podID="3109739d-69b7-439a-b6c4-a8affbe0af4f" containerID="434b05c94a108a94b87c9d056e86bd10915d2cd379e072c24caeee7d45d989df" exitCode=0 Feb 03 10:36:45 crc kubenswrapper[5010]: I0203 10:36:45.975906 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" event={"ID":"3109739d-69b7-439a-b6c4-a8affbe0af4f","Type":"ContainerDied","Data":"434b05c94a108a94b87c9d056e86bd10915d2cd379e072c24caeee7d45d989df"} Feb 03 10:36:46 crc kubenswrapper[5010]: I0203 10:36:46.391034 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:36:46 crc kubenswrapper[5010]: I0203 10:36:46.391149 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:36:46 crc kubenswrapper[5010]: I0203 10:36:46.391276 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:36:46 crc kubenswrapper[5010]: I0203 10:36:46.392510 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5dc093ef0ed9c15b3f47adc87cdb7004279d6322628d13c278c955d2873bd2f0"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:36:46 crc kubenswrapper[5010]: I0203 10:36:46.392590 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://5dc093ef0ed9c15b3f47adc87cdb7004279d6322628d13c278c955d2873bd2f0" gracePeriod=600 Feb 03 10:36:47 crc kubenswrapper[5010]: I0203 10:36:47.016490 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="5dc093ef0ed9c15b3f47adc87cdb7004279d6322628d13c278c955d2873bd2f0" exitCode=0 Feb 03 10:36:47 crc kubenswrapper[5010]: I0203 10:36:47.016695 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"5dc093ef0ed9c15b3f47adc87cdb7004279d6322628d13c278c955d2873bd2f0"} Feb 03 10:36:47 crc kubenswrapper[5010]: I0203 10:36:47.016959 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.025040 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.033774 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" event={"ID":"3109739d-69b7-439a-b6c4-a8affbe0af4f","Type":"ContainerDied","Data":"45737c1cb8e9fea582eea7ed2cd21ed4f6a6d67483896231864db2a1599dc0be"} Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.033836 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45737c1cb8e9fea582eea7ed2cd21ed4f6a6d67483896231864db2a1599dc0be" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.033943 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.036055 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca"} Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.158380 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-inventory\") pod \"3109739d-69b7-439a-b6c4-a8affbe0af4f\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.158449 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-ssh-key-openstack-edpm-ipam\") pod \"3109739d-69b7-439a-b6c4-a8affbe0af4f\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.158534 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jlcb\" (UniqueName: \"kubernetes.io/projected/3109739d-69b7-439a-b6c4-a8affbe0af4f-kube-api-access-5jlcb\") pod \"3109739d-69b7-439a-b6c4-a8affbe0af4f\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.165685 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3109739d-69b7-439a-b6c4-a8affbe0af4f-kube-api-access-5jlcb" (OuterVolumeSpecName: "kube-api-access-5jlcb") pod "3109739d-69b7-439a-b6c4-a8affbe0af4f" (UID: "3109739d-69b7-439a-b6c4-a8affbe0af4f"). InnerVolumeSpecName "kube-api-access-5jlcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.193562 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-inventory" (OuterVolumeSpecName: "inventory") pod "3109739d-69b7-439a-b6c4-a8affbe0af4f" (UID: "3109739d-69b7-439a-b6c4-a8affbe0af4f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.202665 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3109739d-69b7-439a-b6c4-a8affbe0af4f" (UID: "3109739d-69b7-439a-b6c4-a8affbe0af4f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.262969 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.263026 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.263044 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jlcb\" (UniqueName: \"kubernetes.io/projected/3109739d-69b7-439a-b6c4-a8affbe0af4f-kube-api-access-5jlcb\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.134307 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx"] Feb 03 10:36:49 crc kubenswrapper[5010]: E0203 10:36:49.135086 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3109739d-69b7-439a-b6c4-a8affbe0af4f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.135104 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3109739d-69b7-439a-b6c4-a8affbe0af4f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.135324 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3109739d-69b7-439a-b6c4-a8affbe0af4f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.136100 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.138677 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.138941 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.139315 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.139497 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.170967 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx"] Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.290581 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.290680 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.290747 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sklg\" (UniqueName: \"kubernetes.io/projected/49056616-86cd-41cd-a102-1072dc2a79f4-kube-api-access-2sklg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.393184 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.393303 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.393375 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sklg\" (UniqueName: \"kubernetes.io/projected/49056616-86cd-41cd-a102-1072dc2a79f4-kube-api-access-2sklg\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.412714 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.412859 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.417455 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sklg\" (UniqueName: \"kubernetes.io/projected/49056616-86cd-41cd-a102-1072dc2a79f4-kube-api-access-2sklg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.464301 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:50 crc kubenswrapper[5010]: I0203 10:36:50.112747 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx"] Feb 03 10:36:51 crc kubenswrapper[5010]: I0203 10:36:51.070990 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" event={"ID":"49056616-86cd-41cd-a102-1072dc2a79f4","Type":"ContainerStarted","Data":"8ceab44a914b6581fca750f970dc22a5a0859a72d8fff8bc1ebf38c9e4bf8adb"} Feb 03 10:36:51 crc kubenswrapper[5010]: I0203 10:36:51.097539 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" podStartSLOduration=1.412546588 podStartE2EDuration="2.097499832s" podCreationTimestamp="2026-02-03 10:36:49 +0000 UTC" firstStartedPulling="2026-02-03 10:36:50.129112437 +0000 UTC m=+2080.285088566" lastFinishedPulling="2026-02-03 10:36:50.814065681 +0000 UTC m=+2080.970041810" observedRunningTime="2026-02-03 10:36:51.091862329 +0000 UTC m=+2081.247838468" watchObservedRunningTime="2026-02-03 10:36:51.097499832 +0000 UTC m=+2081.253475981" Feb 03 10:36:52 crc kubenswrapper[5010]: I0203 10:36:52.083364 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" event={"ID":"49056616-86cd-41cd-a102-1072dc2a79f4","Type":"ContainerStarted","Data":"5dd8dd8cf6f829db6c31eb69931ea79632501cf4010715f37a3bb745083ad4c7"} Feb 03 10:37:09 crc kubenswrapper[5010]: I0203 10:37:09.057597 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gd6dz"] Feb 03 10:37:09 crc kubenswrapper[5010]: I0203 10:37:09.068745 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gd6dz"] Feb 
03 10:37:10 crc kubenswrapper[5010]: I0203 10:37:10.520700 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ca9130-4a3c-4c64-8557-5c5e29df551d" path="/var/lib/kubelet/pods/49ca9130-4a3c-4c64-8557-5c5e29df551d/volumes" Feb 03 10:37:21 crc kubenswrapper[5010]: I0203 10:37:21.746266 5010 scope.go:117] "RemoveContainer" containerID="4927cc4be235478029139ce32f036f214b152852871af562859aac3f62d37796" Feb 03 10:37:21 crc kubenswrapper[5010]: I0203 10:37:21.782788 5010 scope.go:117] "RemoveContainer" containerID="529624536a7c99d14d746a21069148e69bbb624ecc0d005496493ce4e1241033" Feb 03 10:37:21 crc kubenswrapper[5010]: I0203 10:37:21.862634 5010 scope.go:117] "RemoveContainer" containerID="a966998f1e0d5c656c412830d78b6e892d7c7c270d9300eb5f417be99b11fe63" Feb 03 10:37:21 crc kubenswrapper[5010]: I0203 10:37:21.901474 5010 scope.go:117] "RemoveContainer" containerID="279c8b5f461c06f3191fbc6bb211d5d862c782efbbff978992257a86dd9152d3" Feb 03 10:37:21 crc kubenswrapper[5010]: I0203 10:37:21.956578 5010 scope.go:117] "RemoveContainer" containerID="481559434a2d42e2a028cba399231b55666506a6320e8ddbe78f4de71650ba33" Feb 03 10:37:22 crc kubenswrapper[5010]: I0203 10:37:22.043092 5010 scope.go:117] "RemoveContainer" containerID="277036577a9bb8f26bb26efd4d33210a114ebacd0ae43e4abbbdfbe425f61dd5" Feb 03 10:37:22 crc kubenswrapper[5010]: I0203 10:37:22.078652 5010 scope.go:117] "RemoveContainer" containerID="48902a83c43af8a62b4d6b968a8b3ca68e0101eb2b41fc6cd1fdf99dd7be0466" Feb 03 10:37:28 crc kubenswrapper[5010]: I0203 10:37:28.477125 5010 generic.go:334] "Generic (PLEG): container finished" podID="49056616-86cd-41cd-a102-1072dc2a79f4" containerID="5dd8dd8cf6f829db6c31eb69931ea79632501cf4010715f37a3bb745083ad4c7" exitCode=0 Feb 03 10:37:28 crc kubenswrapper[5010]: I0203 10:37:28.477207 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" event={"ID":"49056616-86cd-41cd-a102-1072dc2a79f4","Type":"ContainerDied","Data":"5dd8dd8cf6f829db6c31eb69931ea79632501cf4010715f37a3bb745083ad4c7"} Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.050687 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.153084 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-ssh-key-openstack-edpm-ipam\") pod \"49056616-86cd-41cd-a102-1072dc2a79f4\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.153328 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-inventory\") pod \"49056616-86cd-41cd-a102-1072dc2a79f4\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.153512 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sklg\" (UniqueName: \"kubernetes.io/projected/49056616-86cd-41cd-a102-1072dc2a79f4-kube-api-access-2sklg\") pod \"49056616-86cd-41cd-a102-1072dc2a79f4\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.162581 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49056616-86cd-41cd-a102-1072dc2a79f4-kube-api-access-2sklg" (OuterVolumeSpecName: "kube-api-access-2sklg") pod "49056616-86cd-41cd-a102-1072dc2a79f4" (UID: "49056616-86cd-41cd-a102-1072dc2a79f4"). InnerVolumeSpecName "kube-api-access-2sklg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.190520 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-inventory" (OuterVolumeSpecName: "inventory") pod "49056616-86cd-41cd-a102-1072dc2a79f4" (UID: "49056616-86cd-41cd-a102-1072dc2a79f4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.190723 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "49056616-86cd-41cd-a102-1072dc2a79f4" (UID: "49056616-86cd-41cd-a102-1072dc2a79f4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.256538 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.256593 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.256606 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sklg\" (UniqueName: \"kubernetes.io/projected/49056616-86cd-41cd-a102-1072dc2a79f4-kube-api-access-2sklg\") on node \"crc\" DevicePath \"\"" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.500413 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" event={"ID":"49056616-86cd-41cd-a102-1072dc2a79f4","Type":"ContainerDied","Data":"8ceab44a914b6581fca750f970dc22a5a0859a72d8fff8bc1ebf38c9e4bf8adb"} Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.500766 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ceab44a914b6581fca750f970dc22a5a0859a72d8fff8bc1ebf38c9e4bf8adb" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.500476 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.699495 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67"] Feb 03 10:37:30 crc kubenswrapper[5010]: E0203 10:37:30.700007 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49056616-86cd-41cd-a102-1072dc2a79f4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.700041 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="49056616-86cd-41cd-a102-1072dc2a79f4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.700345 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="49056616-86cd-41cd-a102-1072dc2a79f4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.701159 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.703522 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.703739 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.703878 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.704054 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.715763 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67"] Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.777361 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsksm\" (UniqueName: \"kubernetes.io/projected/f4e7c571-ff51-496f-81b8-2fee3f357d3f-kube-api-access-fsksm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.777731 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.777946 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.879108 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.879236 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.879281 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsksm\" (UniqueName: 
\"kubernetes.io/projected/f4e7c571-ff51-496f-81b8-2fee3f357d3f-kube-api-access-fsksm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.885493 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.886934 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.900265 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsksm\" (UniqueName: \"kubernetes.io/projected/f4e7c571-ff51-496f-81b8-2fee3f357d3f-kube-api-access-fsksm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:31 crc kubenswrapper[5010]: I0203 10:37:31.024948 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:31 crc kubenswrapper[5010]: I0203 10:37:31.599176 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67"] Feb 03 10:37:32 crc kubenswrapper[5010]: I0203 10:37:32.527454 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" event={"ID":"f4e7c571-ff51-496f-81b8-2fee3f357d3f","Type":"ContainerStarted","Data":"1260438c118656fe4e67ffda841b44ea9f435d72463d4392e2d1bc79c2b65cc4"} Feb 03 10:37:33 crc kubenswrapper[5010]: I0203 10:37:33.544058 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" event={"ID":"f4e7c571-ff51-496f-81b8-2fee3f357d3f","Type":"ContainerStarted","Data":"adefada3395e7a33a2ffaa57c7dcc19ebdacf1eb1ed1e00a028b8ec6c747216c"} Feb 03 10:37:33 crc kubenswrapper[5010]: I0203 10:37:33.574554 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" podStartSLOduration=2.995727611 podStartE2EDuration="3.574516473s" podCreationTimestamp="2026-02-03 10:37:30 +0000 UTC" firstStartedPulling="2026-02-03 10:37:31.61576373 +0000 UTC m=+2121.771739859" lastFinishedPulling="2026-02-03 10:37:32.194552592 +0000 UTC m=+2122.350528721" observedRunningTime="2026-02-03 10:37:33.564621353 +0000 UTC m=+2123.720597492" watchObservedRunningTime="2026-02-03 10:37:33.574516473 +0000 UTC m=+2123.730492612" Feb 03 10:37:37 crc kubenswrapper[5010]: I0203 10:37:37.056635 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwnxk"] Feb 03 10:37:37 crc kubenswrapper[5010]: I0203 
Feb 03 10:37:37 crc kubenswrapper[5010]: I0203 10:37:37.056635 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwnxk"]
Feb 03 10:37:37 crc kubenswrapper[5010]: I0203 10:37:37.070465 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-bqztf"]
Feb 03 10:37:37 crc kubenswrapper[5010]: I0203 10:37:37.080206 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwnxk"]
Feb 03 10:37:37 crc kubenswrapper[5010]: I0203 10:37:37.101202 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-bqztf"]
Feb 03 10:37:38 crc kubenswrapper[5010]: I0203 10:37:38.515215 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="726ff8cb-3f2f-41a6-a61e-a79ed194505f" path="/var/lib/kubelet/pods/726ff8cb-3f2f-41a6-a61e-a79ed194505f/volumes"
Feb 03 10:37:38 crc kubenswrapper[5010]: I0203 10:37:38.516094 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd352716-06a1-47da-9d5d-179bfed70cbe" path="/var/lib/kubelet/pods/bd352716-06a1-47da-9d5d-179bfed70cbe/volumes"
Feb 03 10:38:15 crc kubenswrapper[5010]: I0203 10:38:15.953587 5010 generic.go:334] "Generic (PLEG): container finished" podID="f4e7c571-ff51-496f-81b8-2fee3f357d3f" containerID="adefada3395e7a33a2ffaa57c7dcc19ebdacf1eb1ed1e00a028b8ec6c747216c" exitCode=0
Feb 03 10:38:15 crc kubenswrapper[5010]: I0203 10:38:15.953705 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" event={"ID":"f4e7c571-ff51-496f-81b8-2fee3f357d3f","Type":"ContainerDied","Data":"adefada3395e7a33a2ffaa57c7dcc19ebdacf1eb1ed1e00a028b8ec6c747216c"}
Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.408973 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67"
Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.440930 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsksm\" (UniqueName: \"kubernetes.io/projected/f4e7c571-ff51-496f-81b8-2fee3f357d3f-kube-api-access-fsksm\") pod \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") "
Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.440984 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-ssh-key-openstack-edpm-ipam\") pod \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") "
Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.442279 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-inventory\") pod \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") "
Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.449735 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e7c571-ff51-496f-81b8-2fee3f357d3f-kube-api-access-fsksm" (OuterVolumeSpecName: "kube-api-access-fsksm") pod "f4e7c571-ff51-496f-81b8-2fee3f357d3f" (UID: "f4e7c571-ff51-496f-81b8-2fee3f357d3f"). InnerVolumeSpecName "kube-api-access-fsksm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.479790 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-inventory" (OuterVolumeSpecName: "inventory") pod "f4e7c571-ff51-496f-81b8-2fee3f357d3f" (UID: "f4e7c571-ff51-496f-81b8-2fee3f357d3f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.487638 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f4e7c571-ff51-496f-81b8-2fee3f357d3f" (UID: "f4e7c571-ff51-496f-81b8-2fee3f357d3f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.545628 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.548309 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsksm\" (UniqueName: \"kubernetes.io/projected/f4e7c571-ff51-496f-81b8-2fee3f357d3f-kube-api-access-fsksm\") on node \"crc\" DevicePath \"\""
Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.548373 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.974869 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" event={"ID":"f4e7c571-ff51-496f-81b8-2fee3f357d3f","Type":"ContainerDied","Data":"1260438c118656fe4e67ffda841b44ea9f435d72463d4392e2d1bc79c2b65cc4"}
Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.974924 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1260438c118656fe4e67ffda841b44ea9f435d72463d4392e2d1bc79c2b65cc4"
Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.974956 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.082538 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pfhx5"]
Feb 03 10:38:18 crc kubenswrapper[5010]: E0203 10:38:18.083239 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e7c571-ff51-496f-81b8-2fee3f357d3f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.083266 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e7c571-ff51-496f-81b8-2fee3f357d3f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.083660 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e7c571-ff51-496f-81b8-2fee3f357d3f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.084699 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.087856 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.088121 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.088263 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.088821 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.098393 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pfhx5"]
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.159837 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.159913 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.160206 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dffpv\" (UniqueName: \"kubernetes.io/projected/67a7675c-9074-4390-85ab-2bba845b2dc0-kube-api-access-dffpv\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.262498 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dffpv\" (UniqueName: \"kubernetes.io/projected/67a7675c-9074-4390-85ab-2bba845b2dc0-kube-api-access-dffpv\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.262708 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.262775 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.268130 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.271589 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.287596 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dffpv\" (UniqueName: \"kubernetes.io/projected/67a7675c-9074-4390-85ab-2bba845b2dc0-kube-api-access-dffpv\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5"
Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.447412 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5"
Feb 03 10:38:19 crc kubenswrapper[5010]: I0203 10:38:19.002025 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pfhx5"]
Feb 03 10:38:19 crc kubenswrapper[5010]: I0203 10:38:19.998744 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" event={"ID":"67a7675c-9074-4390-85ab-2bba845b2dc0","Type":"ContainerStarted","Data":"ad84f868170059a7ab2556c16e048551198df5d6e32880c0413f7f752b820801"}
Feb 03 10:38:19 crc kubenswrapper[5010]: I0203 10:38:19.999389 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" event={"ID":"67a7675c-9074-4390-85ab-2bba845b2dc0","Type":"ContainerStarted","Data":"16cfb70c1a01a3b03fa245d03b25ae9e33090c913660087a2c06e2a10bb68b25"}
Feb 03 10:38:20 crc kubenswrapper[5010]: I0203 10:38:20.023297 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" podStartSLOduration=1.562697649 podStartE2EDuration="2.023264863s" podCreationTimestamp="2026-02-03 10:38:18 +0000 UTC" firstStartedPulling="2026-02-03 10:38:19.015453064 +0000 UTC m=+2169.171429193" lastFinishedPulling="2026-02-03 10:38:19.476020278 +0000 UTC m=+2169.631996407" observedRunningTime="2026-02-03 10:38:20.014173574 +0000 UTC m=+2170.170149723" watchObservedRunningTime="2026-02-03 10:38:20.023264863 +0000 UTC m=+2170.179241012"
Feb 03 10:38:21 crc kubenswrapper[5010]: I0203 10:38:21.052501 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-fmn8g"]
Feb 03 10:38:21 crc kubenswrapper[5010]: I0203 10:38:21.068686 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-fmn8g"]
Feb 03 10:38:22 crc kubenswrapper[5010]: I0203 10:38:22.271792 5010 scope.go:117] "RemoveContainer" containerID="9df92dcb078ed6d52131766accb050ab09c268253b0a5a65b5f79c4623de44a8"
Feb 03 10:38:22 crc kubenswrapper[5010]: I0203 10:38:22.340549 5010 scope.go:117] "RemoveContainer" containerID="9ad6b084a459424fdad0649a5c871c7f22695bf5efe4abdfaf37dff65c794a08"
Feb 03 10:38:22 crc kubenswrapper[5010]: I0203 10:38:22.522194 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900a4dd0-c8e2-4416-9a0e-8fff95a5053b" path="/var/lib/kubelet/pods/900a4dd0-c8e2-4416-9a0e-8fff95a5053b/volumes"
Feb 03 10:38:27 crc kubenswrapper[5010]: I0203 10:38:27.072329 5010 generic.go:334] "Generic (PLEG): container finished" podID="67a7675c-9074-4390-85ab-2bba845b2dc0" containerID="ad84f868170059a7ab2556c16e048551198df5d6e32880c0413f7f752b820801" exitCode=0
Feb 03 10:38:27 crc kubenswrapper[5010]: I0203 10:38:27.072415 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" event={"ID":"67a7675c-9074-4390-85ab-2bba845b2dc0","Type":"ContainerDied","Data":"ad84f868170059a7ab2556c16e048551198df5d6e32880c0413f7f752b820801"}
Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.554156 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5"
Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.614486 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dffpv\" (UniqueName: \"kubernetes.io/projected/67a7675c-9074-4390-85ab-2bba845b2dc0-kube-api-access-dffpv\") pod \"67a7675c-9074-4390-85ab-2bba845b2dc0\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") "
Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.614805 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-ssh-key-openstack-edpm-ipam\") pod \"67a7675c-9074-4390-85ab-2bba845b2dc0\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") "
Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.614920 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-inventory-0\") pod \"67a7675c-9074-4390-85ab-2bba845b2dc0\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") "
Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.637840 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a7675c-9074-4390-85ab-2bba845b2dc0-kube-api-access-dffpv" (OuterVolumeSpecName: "kube-api-access-dffpv") pod "67a7675c-9074-4390-85ab-2bba845b2dc0" (UID: "67a7675c-9074-4390-85ab-2bba845b2dc0"). InnerVolumeSpecName "kube-api-access-dffpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.652399 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "67a7675c-9074-4390-85ab-2bba845b2dc0" (UID: "67a7675c-9074-4390-85ab-2bba845b2dc0"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.655172 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "67a7675c-9074-4390-85ab-2bba845b2dc0" (UID: "67a7675c-9074-4390-85ab-2bba845b2dc0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
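The Generic (PLEG) pair above brackets the job container's run: ad84f868... is reported ContainerStarted at Feb 03 10:38:19 and ContainerDied with exitCode=0 at Feb 03 10:38:27. A hedged Go sketch of extracting such a duration from the entries' syslog-style prefix; the prefix carries no year and no sub-second digits, so this is only good for coarse deltas, and parseStamp is an illustrative helper, not kubelet code:

    package main

    import (
    	"fmt"
    	"time"
    )

    // parseStamp parses the zero-padded "Feb 03 10:38:19" prefix these
    // kubelet entries carry (year-less, so useful only for differences).
    func parseStamp(s string) time.Time {
    	t, err := time.Parse("Jan 02 15:04:05", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	// ContainerStarted / ContainerDied timestamps for ad84f868... above.
    	started := parseStamp("Feb 03 10:38:19")
    	died := parseStamp("Feb 03 10:38:27")
    	fmt.Println("container ran for", died.Sub(started)) // 8s
    }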
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.718495 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dffpv\" (UniqueName: \"kubernetes.io/projected/67a7675c-9074-4390-85ab-2bba845b2dc0-kube-api-access-dffpv\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.718922 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.718936 5010 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.092289 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" event={"ID":"67a7675c-9074-4390-85ab-2bba845b2dc0","Type":"ContainerDied","Data":"16cfb70c1a01a3b03fa245d03b25ae9e33090c913660087a2c06e2a10bb68b25"} Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.092353 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16cfb70c1a01a3b03fa245d03b25ae9e33090c913660087a2c06e2a10bb68b25" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.092391 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.196876 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955"] Feb 03 10:38:29 crc kubenswrapper[5010]: E0203 10:38:29.197584 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a7675c-9074-4390-85ab-2bba845b2dc0" containerName="ssh-known-hosts-edpm-deployment" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.197615 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a7675c-9074-4390-85ab-2bba845b2dc0" containerName="ssh-known-hosts-edpm-deployment" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.197859 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a7675c-9074-4390-85ab-2bba845b2dc0" containerName="ssh-known-hosts-edpm-deployment" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.198843 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.202874 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.203041 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.203200 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.203303 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.211980 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955"] Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.335492 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.335995 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.336138 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g95dd\" (UniqueName: \"kubernetes.io/projected/a9fa7d27-81da-4dcd-adef-cb22c35d2641-kube-api-access-g95dd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.438387 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.438799 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.438903 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g95dd\" (UniqueName: \"kubernetes.io/projected/a9fa7d27-81da-4dcd-adef-cb22c35d2641-kube-api-access-g95dd\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.443646 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.444070 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.471049 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g95dd\" (UniqueName: \"kubernetes.io/projected/a9fa7d27-81da-4dcd-adef-cb22c35d2641-kube-api-access-g95dd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.522014 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:30 crc kubenswrapper[5010]: I0203 10:38:30.092680 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955"] Feb 03 10:38:30 crc kubenswrapper[5010]: I0203 10:38:30.106264 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" event={"ID":"a9fa7d27-81da-4dcd-adef-cb22c35d2641","Type":"ContainerStarted","Data":"3f547aa3ae89e8ad869fa80f68d0d92a3b533f4502565adfe14ea21576437811"} Feb 03 10:38:31 crc kubenswrapper[5010]: I0203 10:38:31.121626 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" event={"ID":"a9fa7d27-81da-4dcd-adef-cb22c35d2641","Type":"ContainerStarted","Data":"a2c1a089ffb9018c1598744774eeab67fd4a670e32068961d30cfdacfb7003cf"} Feb 03 10:38:31 crc kubenswrapper[5010]: I0203 10:38:31.145841 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" podStartSLOduration=1.711113717 podStartE2EDuration="2.145810988s" podCreationTimestamp="2026-02-03 10:38:29 +0000 UTC" firstStartedPulling="2026-02-03 10:38:30.093443352 +0000 UTC m=+2180.249419481" lastFinishedPulling="2026-02-03 10:38:30.528140623 +0000 UTC m=+2180.684116752" observedRunningTime="2026-02-03 10:38:31.143206412 +0000 UTC m=+2181.299182551" watchObservedRunningTime="2026-02-03 10:38:31.145810988 +0000 UTC m=+2181.301787127" Feb 03 10:38:39 crc kubenswrapper[5010]: I0203 10:38:39.214663 5010 generic.go:334] "Generic (PLEG): container finished" podID="a9fa7d27-81da-4dcd-adef-cb22c35d2641" containerID="a2c1a089ffb9018c1598744774eeab67fd4a670e32068961d30cfdacfb7003cf" exitCode=0 Feb 03 10:38:39 crc kubenswrapper[5010]: I0203 10:38:39.214747 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" event={"ID":"a9fa7d27-81da-4dcd-adef-cb22c35d2641","Type":"ContainerDied","Data":"a2c1a089ffb9018c1598744774eeab67fd4a670e32068961d30cfdacfb7003cf"} Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.671322 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.697377 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-ssh-key-openstack-edpm-ipam\") pod \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.697533 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g95dd\" (UniqueName: \"kubernetes.io/projected/a9fa7d27-81da-4dcd-adef-cb22c35d2641-kube-api-access-g95dd\") pod \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.697713 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-inventory\") pod \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.707941 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9fa7d27-81da-4dcd-adef-cb22c35d2641-kube-api-access-g95dd" (OuterVolumeSpecName: "kube-api-access-g95dd") pod "a9fa7d27-81da-4dcd-adef-cb22c35d2641" (UID: "a9fa7d27-81da-4dcd-adef-cb22c35d2641"). InnerVolumeSpecName "kube-api-access-g95dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.734255 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a9fa7d27-81da-4dcd-adef-cb22c35d2641" (UID: "a9fa7d27-81da-4dcd-adef-cb22c35d2641"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.734722 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-inventory" (OuterVolumeSpecName: "inventory") pod "a9fa7d27-81da-4dcd-adef-cb22c35d2641" (UID: "a9fa7d27-81da-4dcd-adef-cb22c35d2641"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.801309 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g95dd\" (UniqueName: \"kubernetes.io/projected/a9fa7d27-81da-4dcd-adef-cb22c35d2641-kube-api-access-g95dd\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.801359 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.801383 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.245344 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" event={"ID":"a9fa7d27-81da-4dcd-adef-cb22c35d2641","Type":"ContainerDied","Data":"3f547aa3ae89e8ad869fa80f68d0d92a3b533f4502565adfe14ea21576437811"} Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.245403 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f547aa3ae89e8ad869fa80f68d0d92a3b533f4502565adfe14ea21576437811" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.245437 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.348585 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt"] Feb 03 10:38:41 crc kubenswrapper[5010]: E0203 10:38:41.349303 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9fa7d27-81da-4dcd-adef-cb22c35d2641" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.349333 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9fa7d27-81da-4dcd-adef-cb22c35d2641" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.349647 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9fa7d27-81da-4dcd-adef-cb22c35d2641" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.350750 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.354974 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt"] Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.355429 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.357046 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.357297 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.357528 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.416097 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v98gt\" (UniqueName: \"kubernetes.io/projected/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-kube-api-access-v98gt\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.416494 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.416759 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.519587 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.519768 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.519855 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v98gt\" (UniqueName: \"kubernetes.io/projected/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-kube-api-access-v98gt\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.525112 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.527015 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.541963 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v98gt\" (UniqueName: \"kubernetes.io/projected/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-kube-api-access-v98gt\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.679413 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:42 crc kubenswrapper[5010]: I0203 10:38:42.295110 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt"] Feb 03 10:38:42 crc kubenswrapper[5010]: W0203 10:38:42.302396 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4357ef1_04ea_4dbd_acd8_70f34a5a72a1.slice/crio-eb60583b8ef340e99d80b80a3479341611c17448436cb30d55be356059ffb49f WatchSource:0}: Error finding container eb60583b8ef340e99d80b80a3479341611c17448436cb30d55be356059ffb49f: Status 404 returned error can't find the container with id eb60583b8ef340e99d80b80a3479341611c17448436cb30d55be356059ffb49f Feb 03 10:38:43 crc kubenswrapper[5010]: I0203 10:38:43.267025 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" event={"ID":"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1","Type":"ContainerStarted","Data":"6c7d133f60ff286a66264a98b7f12f03aac4dfb882e4add0318c4b41c3b61c5e"} Feb 03 10:38:43 crc kubenswrapper[5010]: I0203 10:38:43.267094 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" event={"ID":"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1","Type":"ContainerStarted","Data":"eb60583b8ef340e99d80b80a3479341611c17448436cb30d55be356059ffb49f"} Feb 03 10:38:43 crc kubenswrapper[5010]: I0203 10:38:43.299498 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" podStartSLOduration=1.832571597 podStartE2EDuration="2.299459882s" podCreationTimestamp="2026-02-03 10:38:41 +0000 UTC" firstStartedPulling="2026-02-03 10:38:42.307350048 +0000 UTC m=+2192.463326177" lastFinishedPulling="2026-02-03 10:38:42.774238333 +0000 UTC 
m=+2192.930214462" observedRunningTime="2026-02-03 10:38:43.288804582 +0000 UTC m=+2193.444780721" watchObservedRunningTime="2026-02-03 10:38:43.299459882 +0000 UTC m=+2193.455436021" Feb 03 10:38:46 crc kubenswrapper[5010]: I0203 10:38:46.393203 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:38:46 crc kubenswrapper[5010]: I0203 10:38:46.394452 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:38:53 crc kubenswrapper[5010]: I0203 10:38:53.374846 5010 generic.go:334] "Generic (PLEG): container finished" podID="d4357ef1-04ea-4dbd-acd8-70f34a5a72a1" containerID="6c7d133f60ff286a66264a98b7f12f03aac4dfb882e4add0318c4b41c3b61c5e" exitCode=0 Feb 03 10:38:53 crc kubenswrapper[5010]: I0203 10:38:53.374953 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" event={"ID":"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1","Type":"ContainerDied","Data":"6c7d133f60ff286a66264a98b7f12f03aac4dfb882e4add0318c4b41c3b61c5e"} Feb 03 10:38:54 crc kubenswrapper[5010]: I0203 10:38:54.839410 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:54 crc kubenswrapper[5010]: I0203 10:38:54.960424 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-ssh-key-openstack-edpm-ipam\") pod \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " Feb 03 10:38:54 crc kubenswrapper[5010]: I0203 10:38:54.961011 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v98gt\" (UniqueName: \"kubernetes.io/projected/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-kube-api-access-v98gt\") pod \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " Feb 03 10:38:54 crc kubenswrapper[5010]: I0203 10:38:54.961267 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-inventory\") pod \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " Feb 03 10:38:54 crc kubenswrapper[5010]: I0203 10:38:54.968383 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-kube-api-access-v98gt" (OuterVolumeSpecName: "kube-api-access-v98gt") pod "d4357ef1-04ea-4dbd-acd8-70f34a5a72a1" (UID: "d4357ef1-04ea-4dbd-acd8-70f34a5a72a1"). InnerVolumeSpecName "kube-api-access-v98gt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:38:54 crc kubenswrapper[5010]: I0203 10:38:54.999573 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d4357ef1-04ea-4dbd-acd8-70f34a5a72a1" (UID: "d4357ef1-04ea-4dbd-acd8-70f34a5a72a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.015347 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-inventory" (OuterVolumeSpecName: "inventory") pod "d4357ef1-04ea-4dbd-acd8-70f34a5a72a1" (UID: "d4357ef1-04ea-4dbd-acd8-70f34a5a72a1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.065564 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v98gt\" (UniqueName: \"kubernetes.io/projected/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-kube-api-access-v98gt\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.065622 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.065669 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.397719 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" event={"ID":"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1","Type":"ContainerDied","Data":"eb60583b8ef340e99d80b80a3479341611c17448436cb30d55be356059ffb49f"} Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.398294 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb60583b8ef340e99d80b80a3479341611c17448436cb30d55be356059ffb49f" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.398394 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.585277 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t"] Feb 03 10:38:55 crc kubenswrapper[5010]: E0203 10:38:55.586419 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4357ef1-04ea-4dbd-acd8-70f34a5a72a1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.586529 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4357ef1-04ea-4dbd-acd8-70f34a5a72a1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.586958 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4357ef1-04ea-4dbd-acd8-70f34a5a72a1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.588232 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.594135 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.594190 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.595288 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.595370 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.595380 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.595757 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.597319 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.598518 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.605785 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t"] Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.783267 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.783366 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.783424 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf48t\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-kube-api-access-bf48t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.783668 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ovn-combined-ca-bundle\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.783752 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.783829 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784013 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784081 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784383 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784460 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784674 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784790 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784881 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784957 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.886994 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887065 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887137 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887174 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: 
I0203 10:38:55.887252 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887282 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887322 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887360 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887406 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887445 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887484 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf48t\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-kube-api-access-bf48t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887528 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887552 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.888118 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.895581 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.895587 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.895607 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.895700 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.895908 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 
crc kubenswrapper[5010]: I0203 10:38:55.896592 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.897089 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.897807 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.898395 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.898498 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.901128 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.902651 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.898502 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.906188 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf48t\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-kube-api-access-bf48t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.914191 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:56 crc kubenswrapper[5010]: I0203 10:38:56.539336 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t"] Feb 03 10:38:57 crc kubenswrapper[5010]: I0203 10:38:57.431117 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" event={"ID":"af6128d5-2369-4ef9-99aa-61ad0bf3b213","Type":"ContainerStarted","Data":"63dca3b86ebc0bedc83753b112381678ddcf76ec0ef2ca15d3c8afd4ecbd5d8f"} Feb 03 10:38:58 crc kubenswrapper[5010]: I0203 10:38:58.442306 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" event={"ID":"af6128d5-2369-4ef9-99aa-61ad0bf3b213","Type":"ContainerStarted","Data":"9a318ac7fe459a01328aa8f01152357fffc9c775f7ce36af393d101490d5caae"} Feb 03 10:38:58 crc kubenswrapper[5010]: I0203 10:38:58.474318 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" podStartSLOduration=2.79230515 podStartE2EDuration="3.474286879s" podCreationTimestamp="2026-02-03 10:38:55 +0000 UTC" firstStartedPulling="2026-02-03 10:38:56.54385561 +0000 UTC m=+2206.699831759" lastFinishedPulling="2026-02-03 10:38:57.225837359 +0000 UTC m=+2207.381813488" observedRunningTime="2026-02-03 10:38:58.465086876 +0000 UTC m=+2208.621063005" watchObservedRunningTime="2026-02-03 10:38:58.474286879 +0000 UTC m=+2208.630263018" Feb 03 10:39:16 crc kubenswrapper[5010]: I0203 10:39:16.390719 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:39:16 crc kubenswrapper[5010]: I0203 10:39:16.391853 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:39:22 crc kubenswrapper[5010]: I0203 10:39:22.452006 5010 scope.go:117] "RemoveContainer" containerID="79dc7129a99144c2e59b3fda9930b79947c9ac7a248d6f8abe7b85572f2f5ea2" Feb 03 10:39:32 crc kubenswrapper[5010]: I0203 10:39:32.842942 5010 generic.go:334] "Generic (PLEG): container finished" podID="af6128d5-2369-4ef9-99aa-61ad0bf3b213" containerID="9a318ac7fe459a01328aa8f01152357fffc9c775f7ce36af393d101490d5caae" exitCode=0 Feb 
03 10:39:32 crc kubenswrapper[5010]: I0203 10:39:32.843045 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" event={"ID":"af6128d5-2369-4ef9-99aa-61ad0bf3b213","Type":"ContainerDied","Data":"9a318ac7fe459a01328aa8f01152357fffc9c775f7ce36af393d101490d5caae"} Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.314429 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.379627 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.379677 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.379798 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.379836 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ovn-combined-ca-bundle\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.379876 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ssh-key-openstack-edpm-ipam\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.379913 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-ovn-default-certs-0\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.379968 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-libvirt-combined-ca-bundle\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.380073 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-telemetry-combined-ca-bundle\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.380098 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-bootstrap-combined-ca-bundle\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.380131 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf48t\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-kube-api-access-bf48t\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.380201 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-nova-combined-ca-bundle\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.380369 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-inventory\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.380397 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-neutron-metadata-combined-ca-bundle\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.380473 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-repo-setup-combined-ca-bundle\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.390637 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.390909 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.391320 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.391544 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.391799 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.393642 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.393786 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-kube-api-access-bf48t" (OuterVolumeSpecName: "kube-api-access-bf48t") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "kube-api-access-bf48t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.394299 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.396676 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.396911 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.399184 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.410850 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.424663 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.434647 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-inventory" (OuterVolumeSpecName: "inventory") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485088 5010 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485140 5010 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485155 5010 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485168 5010 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485178 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485188 5010 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485198 5010 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485225 5010 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485237 5010 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485246 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf48t\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-kube-api-access-bf48t\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485254 5010 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485263 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485271 5010 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485282 5010 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.868140 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" event={"ID":"af6128d5-2369-4ef9-99aa-61ad0bf3b213","Type":"ContainerDied","Data":"63dca3b86ebc0bedc83753b112381678ddcf76ec0ef2ca15d3c8afd4ecbd5d8f"} Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.868240 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63dca3b86ebc0bedc83753b112381678ddcf76ec0ef2ca15d3c8afd4ecbd5d8f" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.868616 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.993223 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms"] Feb 03 10:39:34 crc kubenswrapper[5010]: E0203 10:39:34.993781 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af6128d5-2369-4ef9-99aa-61ad0bf3b213" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.993803 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6128d5-2369-4ef9-99aa-61ad0bf3b213" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.994033 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6128d5-2369-4ef9-99aa-61ad0bf3b213" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.994891 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.998138 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.998344 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.998486 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.999016 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.000851 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.004702 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms"] Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.103907 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.104179 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.104275 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.104362 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xm69\" (UniqueName: \"kubernetes.io/projected/a3aac34b-fb9e-4853-9a1d-c311dc75f055-kube-api-access-4xm69\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.104416 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.207587 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.207664 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.207725 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.207766 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xm69\" (UniqueName: \"kubernetes.io/projected/a3aac34b-fb9e-4853-9a1d-c311dc75f055-kube-api-access-4xm69\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.207803 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.210056 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.215510 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.217096 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.224293 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.230166 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xm69\" (UniqueName: \"kubernetes.io/projected/a3aac34b-fb9e-4853-9a1d-c311dc75f055-kube-api-access-4xm69\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.314338 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.913858 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms"] Feb 03 10:39:36 crc kubenswrapper[5010]: I0203 10:39:36.889564 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" event={"ID":"a3aac34b-fb9e-4853-9a1d-c311dc75f055","Type":"ContainerStarted","Data":"3f52d9d1e92e9e90ce0959d75ce4b497668740336daab15c8282bd36822b5df4"} Feb 03 10:39:36 crc kubenswrapper[5010]: I0203 10:39:36.891412 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" event={"ID":"a3aac34b-fb9e-4853-9a1d-c311dc75f055","Type":"ContainerStarted","Data":"f228078c9d3c1e62c32b6cff959cfdd12494b7ed083a2163851fad632fde6f98"} Feb 03 10:39:36 crc kubenswrapper[5010]: I0203 10:39:36.917974 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" podStartSLOduration=2.48103193 podStartE2EDuration="2.917953288s" podCreationTimestamp="2026-02-03 10:39:34 +0000 UTC" firstStartedPulling="2026-02-03 10:39:35.927122276 +0000 UTC m=+2246.083098405" lastFinishedPulling="2026-02-03 10:39:36.364043634 +0000 UTC m=+2246.520019763" observedRunningTime="2026-02-03 10:39:36.912132171 +0000 UTC m=+2247.068108320" watchObservedRunningTime="2026-02-03 10:39:36.917953288 +0000 UTC m=+2247.073929417" Feb 03 10:39:46 crc kubenswrapper[5010]: I0203 10:39:46.392163 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:39:46 crc kubenswrapper[5010]: I0203 10:39:46.393192 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:39:46 crc kubenswrapper[5010]: I0203 10:39:46.393314 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:39:46 crc kubenswrapper[5010]: I0203 10:39:46.394956 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:39:46 crc kubenswrapper[5010]: I0203 10:39:46.395142 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" gracePeriod=600 Feb 03 10:39:46 crc kubenswrapper[5010]: E0203 10:39:46.533857 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:39:47 crc kubenswrapper[5010]: I0203 10:39:47.018303 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" exitCode=0 Feb 03 10:39:47 crc kubenswrapper[5010]: I0203 10:39:47.018377 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca"} Feb 03 10:39:47 crc kubenswrapper[5010]: I0203 10:39:47.018751 5010 scope.go:117] "RemoveContainer" containerID="5dc093ef0ed9c15b3f47adc87cdb7004279d6322628d13c278c955d2873bd2f0" Feb 03 10:39:47 crc kubenswrapper[5010]: I0203 10:39:47.019741 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:39:47 crc kubenswrapper[5010]: E0203 10:39:47.020060 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.420532 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7nbtm"] Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.424760 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.445636 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7nbtm"] Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.544659 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-catalog-content\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.545094 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsvrh\" (UniqueName: \"kubernetes.io/projected/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-kube-api-access-vsvrh\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.545256 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-utilities\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.648521 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-catalog-content\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.648966 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsvrh\" (UniqueName: \"kubernetes.io/projected/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-kube-api-access-vsvrh\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.649088 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-utilities\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.649412 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-catalog-content\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.649978 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-utilities\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.673166 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vsvrh\" (UniqueName: \"kubernetes.io/projected/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-kube-api-access-vsvrh\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.751899 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:56 crc kubenswrapper[5010]: I0203 10:39:56.235230 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7nbtm"] Feb 03 10:39:57 crc kubenswrapper[5010]: I0203 10:39:57.142578 5010 generic.go:334] "Generic (PLEG): container finished" podID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerID="0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4" exitCode=0 Feb 03 10:39:57 crc kubenswrapper[5010]: I0203 10:39:57.142719 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nbtm" event={"ID":"2f99b9bf-8e73-486e-9a15-bb92116cfcf2","Type":"ContainerDied","Data":"0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4"} Feb 03 10:39:57 crc kubenswrapper[5010]: I0203 10:39:57.143062 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nbtm" event={"ID":"2f99b9bf-8e73-486e-9a15-bb92116cfcf2","Type":"ContainerStarted","Data":"9bc445c008eaa6b813b2b4224ac9fac4cd84d22c820ce73495cef261f897be92"} Feb 03 10:39:58 crc kubenswrapper[5010]: I0203 10:39:58.157366 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nbtm" event={"ID":"2f99b9bf-8e73-486e-9a15-bb92116cfcf2","Type":"ContainerStarted","Data":"a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe"} Feb 03 10:40:01 crc kubenswrapper[5010]: I0203 10:40:01.191931 5010 generic.go:334] "Generic (PLEG): container finished" podID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerID="a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe" exitCode=0 Feb 03 10:40:01 crc kubenswrapper[5010]: I0203 10:40:01.192323 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nbtm" event={"ID":"2f99b9bf-8e73-486e-9a15-bb92116cfcf2","Type":"ContainerDied","Data":"a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe"} Feb 03 10:40:02 crc kubenswrapper[5010]: I0203 10:40:02.217321 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nbtm" event={"ID":"2f99b9bf-8e73-486e-9a15-bb92116cfcf2","Type":"ContainerStarted","Data":"dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8"} Feb 03 10:40:02 crc kubenswrapper[5010]: I0203 10:40:02.244346 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7nbtm" podStartSLOduration=2.673857046 podStartE2EDuration="7.244321879s" podCreationTimestamp="2026-02-03 10:39:55 +0000 UTC" firstStartedPulling="2026-02-03 10:39:57.14634947 +0000 UTC m=+2267.302325599" lastFinishedPulling="2026-02-03 10:40:01.716814293 +0000 UTC m=+2271.872790432" observedRunningTime="2026-02-03 10:40:02.24198364 +0000 UTC m=+2272.397959779" watchObservedRunningTime="2026-02-03 10:40:02.244321879 +0000 UTC m=+2272.400298028" Feb 03 10:40:02 crc kubenswrapper[5010]: I0203 10:40:02.502974 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:40:02 
crc kubenswrapper[5010]: E0203 10:40:02.503289 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:40:05 crc kubenswrapper[5010]: I0203 10:40:05.752170 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:40:05 crc kubenswrapper[5010]: I0203 10:40:05.752708 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:40:06 crc kubenswrapper[5010]: I0203 10:40:06.807160 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7nbtm" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="registry-server" probeResult="failure" output=< Feb 03 10:40:06 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s Feb 03 10:40:06 crc kubenswrapper[5010]: > Feb 03 10:40:14 crc kubenswrapper[5010]: I0203 10:40:14.502734 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:40:14 crc kubenswrapper[5010]: E0203 10:40:14.504123 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:40:15 crc kubenswrapper[5010]: I0203 10:40:15.804681 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:40:15 crc kubenswrapper[5010]: I0203 10:40:15.867278 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:40:16 crc kubenswrapper[5010]: I0203 10:40:16.047502 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7nbtm"] Feb 03 10:40:17 crc kubenswrapper[5010]: I0203 10:40:17.379612 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7nbtm" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="registry-server" containerID="cri-o://dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8" gracePeriod=2 Feb 03 10:40:17 crc kubenswrapper[5010]: E0203 10:40:17.781329 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f99b9bf_8e73_486e_9a15_bb92116cfcf2.slice/crio-conmon-dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f99b9bf_8e73_486e_9a15_bb92116cfcf2.slice/crio-dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8.scope\": RecentStats: unable to find data in memory cache]" Feb 03 10:40:18 crc kubenswrapper[5010]: 
I0203 10:40:18.189032 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.337387 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-utilities\") pod \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.337563 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-catalog-content\") pod \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.337628 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsvrh\" (UniqueName: \"kubernetes.io/projected/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-kube-api-access-vsvrh\") pod \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.338175 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-utilities" (OuterVolumeSpecName: "utilities") pod "2f99b9bf-8e73-486e-9a15-bb92116cfcf2" (UID: "2f99b9bf-8e73-486e-9a15-bb92116cfcf2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.338399 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.344233 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-kube-api-access-vsvrh" (OuterVolumeSpecName: "kube-api-access-vsvrh") pod "2f99b9bf-8e73-486e-9a15-bb92116cfcf2" (UID: "2f99b9bf-8e73-486e-9a15-bb92116cfcf2"). InnerVolumeSpecName "kube-api-access-vsvrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.394295 5010 generic.go:334] "Generic (PLEG): container finished" podID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerID="dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8" exitCode=0 Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.394361 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nbtm" event={"ID":"2f99b9bf-8e73-486e-9a15-bb92116cfcf2","Type":"ContainerDied","Data":"dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8"} Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.394403 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nbtm" event={"ID":"2f99b9bf-8e73-486e-9a15-bb92116cfcf2","Type":"ContainerDied","Data":"9bc445c008eaa6b813b2b4224ac9fac4cd84d22c820ce73495cef261f897be92"} Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.394400 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.394422 5010 scope.go:117] "RemoveContainer" containerID="dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.423590 5010 scope.go:117] "RemoveContainer" containerID="a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.441910 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsvrh\" (UniqueName: \"kubernetes.io/projected/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-kube-api-access-vsvrh\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.456842 5010 scope.go:117] "RemoveContainer" containerID="0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.555192 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f99b9bf-8e73-486e-9a15-bb92116cfcf2" (UID: "2f99b9bf-8e73-486e-9a15-bb92116cfcf2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.563302 5010 scope.go:117] "RemoveContainer" containerID="dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8" Feb 03 10:40:18 crc kubenswrapper[5010]: E0203 10:40:18.565874 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8\": container with ID starting with dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8 not found: ID does not exist" containerID="dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.565945 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8"} err="failed to get container status \"dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8\": rpc error: code = NotFound desc = could not find container \"dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8\": container with ID starting with dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8 not found: ID does not exist" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.565977 5010 scope.go:117] "RemoveContainer" containerID="a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe" Feb 03 10:40:18 crc kubenswrapper[5010]: E0203 10:40:18.571007 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe\": container with ID starting with a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe not found: ID does not exist" containerID="a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.571079 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe"} err="failed to get container status 
\"a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe\": rpc error: code = NotFound desc = could not find container \"a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe\": container with ID starting with a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe not found: ID does not exist" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.571119 5010 scope.go:117] "RemoveContainer" containerID="0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4" Feb 03 10:40:18 crc kubenswrapper[5010]: E0203 10:40:18.571741 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4\": container with ID starting with 0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4 not found: ID does not exist" containerID="0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.571826 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4"} err="failed to get container status \"0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4\": rpc error: code = NotFound desc = could not find container \"0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4\": container with ID starting with 0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4 not found: ID does not exist" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.651454 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.724506 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7nbtm"] Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.733491 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7nbtm"] Feb 03 10:40:20 crc kubenswrapper[5010]: I0203 10:40:20.519612 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" path="/var/lib/kubelet/pods/2f99b9bf-8e73-486e-9a15-bb92116cfcf2/volumes" Feb 03 10:40:28 crc kubenswrapper[5010]: I0203 10:40:28.503010 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:40:28 crc kubenswrapper[5010]: E0203 10:40:28.503949 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:40:37 crc kubenswrapper[5010]: I0203 10:40:37.600753 5010 generic.go:334] "Generic (PLEG): container finished" podID="a3aac34b-fb9e-4853-9a1d-c311dc75f055" containerID="3f52d9d1e92e9e90ce0959d75ce4b497668740336daab15c8282bd36822b5df4" exitCode=0 Feb 03 10:40:37 crc kubenswrapper[5010]: I0203 10:40:37.600880 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" 
event={"ID":"a3aac34b-fb9e-4853-9a1d-c311dc75f055","Type":"ContainerDied","Data":"3f52d9d1e92e9e90ce0959d75ce4b497668740336daab15c8282bd36822b5df4"} Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.058257 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.071399 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovncontroller-config-0\") pod \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.071471 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-inventory\") pod \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.071497 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ssh-key-openstack-edpm-ipam\") pod \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.071554 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovn-combined-ca-bundle\") pod \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.071618 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xm69\" (UniqueName: \"kubernetes.io/projected/a3aac34b-fb9e-4853-9a1d-c311dc75f055-kube-api-access-4xm69\") pod \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.094821 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3aac34b-fb9e-4853-9a1d-c311dc75f055-kube-api-access-4xm69" (OuterVolumeSpecName: "kube-api-access-4xm69") pod "a3aac34b-fb9e-4853-9a1d-c311dc75f055" (UID: "a3aac34b-fb9e-4853-9a1d-c311dc75f055"). InnerVolumeSpecName "kube-api-access-4xm69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.100151 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a3aac34b-fb9e-4853-9a1d-c311dc75f055" (UID: "a3aac34b-fb9e-4853-9a1d-c311dc75f055"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.118848 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a3aac34b-fb9e-4853-9a1d-c311dc75f055" (UID: "a3aac34b-fb9e-4853-9a1d-c311dc75f055"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.119163 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "a3aac34b-fb9e-4853-9a1d-c311dc75f055" (UID: "a3aac34b-fb9e-4853-9a1d-c311dc75f055"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.124729 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-inventory" (OuterVolumeSpecName: "inventory") pod "a3aac34b-fb9e-4853-9a1d-c311dc75f055" (UID: "a3aac34b-fb9e-4853-9a1d-c311dc75f055"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.174980 5010 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.175032 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.175050 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.175066 5010 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.175081 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xm69\" (UniqueName: \"kubernetes.io/projected/a3aac34b-fb9e-4853-9a1d-c311dc75f055-kube-api-access-4xm69\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.631296 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" event={"ID":"a3aac34b-fb9e-4853-9a1d-c311dc75f055","Type":"ContainerDied","Data":"f228078c9d3c1e62c32b6cff959cfdd12494b7ed083a2163851fad632fde6f98"} Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.631832 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f228078c9d3c1e62c32b6cff959cfdd12494b7ed083a2163851fad632fde6f98" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.631443 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.752611 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p"] Feb 03 10:40:39 crc kubenswrapper[5010]: E0203 10:40:39.753197 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="extract-utilities" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.753240 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="extract-utilities" Feb 03 10:40:39 crc kubenswrapper[5010]: E0203 10:40:39.753284 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3aac34b-fb9e-4853-9a1d-c311dc75f055" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.753294 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3aac34b-fb9e-4853-9a1d-c311dc75f055" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 10:40:39 crc kubenswrapper[5010]: E0203 10:40:39.753319 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="registry-server" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.753327 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="registry-server" Feb 03 10:40:39 crc kubenswrapper[5010]: E0203 10:40:39.753339 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="extract-content" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.753347 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="extract-content" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.753597 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="registry-server" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.753638 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3aac34b-fb9e-4853-9a1d-c311dc75f055" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.754447 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.760909 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.761596 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.761675 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.761934 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.762073 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.773433 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.774970 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p"] Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.787665 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.787730 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.787762 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.787802 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ctpk\" (UniqueName: \"kubernetes.io/projected/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-kube-api-access-8ctpk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.788102 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.788198 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.890056 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ctpk\" (UniqueName: \"kubernetes.io/projected/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-kube-api-access-8ctpk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.890299 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.890361 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.890396 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.890425 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.890452 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.897092 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.898065 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.898550 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.902560 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.907360 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.915389 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ctpk\" (UniqueName: \"kubernetes.io/projected/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-kube-api-access-8ctpk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:40 crc kubenswrapper[5010]: I0203 10:40:40.091865 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:40 crc kubenswrapper[5010]: I0203 10:40:40.509358 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:40:40 crc kubenswrapper[5010]: E0203 10:40:40.510309 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:40:40 crc kubenswrapper[5010]: I0203 10:40:40.677727 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p"] Feb 03 10:40:41 crc kubenswrapper[5010]: I0203 10:40:41.651820 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" event={"ID":"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e","Type":"ContainerStarted","Data":"353e88e11bb683a6d69babb16cd3d7bdaabf21b7deb3b73ec560099bb2acad68"} Feb 03 10:40:41 crc kubenswrapper[5010]: I0203 10:40:41.652371 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" event={"ID":"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e","Type":"ContainerStarted","Data":"c85bd6b31d4790e41a050bcdc12b1527bf94989144fef23a23b08a1424662ce1"} Feb 03 10:40:41 crc kubenswrapper[5010]: I0203 10:40:41.676816 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" podStartSLOduration=2.175466593 podStartE2EDuration="2.676795588s" podCreationTimestamp="2026-02-03 10:40:39 +0000 UTC" firstStartedPulling="2026-02-03 10:40:40.690824179 +0000 UTC m=+2310.846800308" lastFinishedPulling="2026-02-03 10:40:41.192153174 +0000 UTC m=+2311.348129303" observedRunningTime="2026-02-03 10:40:41.669318809 +0000 UTC m=+2311.825294938" watchObservedRunningTime="2026-02-03 10:40:41.676795588 +0000 UTC m=+2311.832771707" Feb 03 10:40:52 crc kubenswrapper[5010]: I0203 10:40:52.503479 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:40:52 crc kubenswrapper[5010]: E0203 10:40:52.504380 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:41:03 crc kubenswrapper[5010]: I0203 10:41:03.502862 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:41:03 crc kubenswrapper[5010]: E0203 10:41:03.504003 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:41:15 crc kubenswrapper[5010]: I0203 10:41:15.503052 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:41:15 crc kubenswrapper[5010]: E0203 10:41:15.504275 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:41:27 crc kubenswrapper[5010]: I0203 10:41:27.501919 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:41:27 crc kubenswrapper[5010]: E0203 10:41:27.502945 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:41:28 crc kubenswrapper[5010]: I0203 10:41:28.123850 5010 generic.go:334] "Generic (PLEG): container finished" podID="4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" containerID="353e88e11bb683a6d69babb16cd3d7bdaabf21b7deb3b73ec560099bb2acad68" exitCode=0 Feb 03 10:41:28 crc kubenswrapper[5010]: I0203 10:41:28.123885 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" event={"ID":"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e","Type":"ContainerDied","Data":"353e88e11bb683a6d69babb16cd3d7bdaabf21b7deb3b73ec560099bb2acad68"} Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.599072 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.723380 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-inventory\") pod \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.723443 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ctpk\" (UniqueName: \"kubernetes.io/projected/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-kube-api-access-8ctpk\") pod \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.723750 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-ssh-key-openstack-edpm-ipam\") pod \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.723789 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-metadata-combined-ca-bundle\") pod \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.723892 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.723955 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-nova-metadata-neutron-config-0\") pod \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.732574 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" (UID: "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.740732 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-kube-api-access-8ctpk" (OuterVolumeSpecName: "kube-api-access-8ctpk") pod "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" (UID: "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e"). InnerVolumeSpecName "kube-api-access-8ctpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.760840 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" (UID: "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.764182 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-inventory" (OuterVolumeSpecName: "inventory") pod "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" (UID: "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.764643 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" (UID: "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.776703 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" (UID: "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.826520 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.826571 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ctpk\" (UniqueName: \"kubernetes.io/projected/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-kube-api-access-8ctpk\") on node \"crc\" DevicePath \"\"" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.826593 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.826609 5010 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.826626 5010 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.826642 5010 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.146656 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" event={"ID":"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e","Type":"ContainerDied","Data":"c85bd6b31d4790e41a050bcdc12b1527bf94989144fef23a23b08a1424662ce1"} Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.147488 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c85bd6b31d4790e41a050bcdc12b1527bf94989144fef23a23b08a1424662ce1" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.146708 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.242716 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d"] Feb 03 10:41:30 crc kubenswrapper[5010]: E0203 10:41:30.243363 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.243395 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.243692 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.244612 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.249964 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.250142 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.250623 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.252320 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.252620 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.255027 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d"] Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.337673 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.337752 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.338071 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5fnn\" (UniqueName: \"kubernetes.io/projected/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-kube-api-access-p5fnn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: 
\"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.338721 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.338886 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.442038 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5fnn\" (UniqueName: \"kubernetes.io/projected/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-kube-api-access-p5fnn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.442406 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.442471 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.442549 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.442579 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.448800 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: 
\"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.449467 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.451806 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.452294 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.465041 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5fnn\" (UniqueName: \"kubernetes.io/projected/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-kube-api-access-p5fnn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.572933 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:31 crc kubenswrapper[5010]: I0203 10:41:31.125161 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d"] Feb 03 10:41:31 crc kubenswrapper[5010]: I0203 10:41:31.128013 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 10:41:31 crc kubenswrapper[5010]: I0203 10:41:31.171054 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" event={"ID":"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d","Type":"ContainerStarted","Data":"27b2e3f9236cd72b126e3e7945fd42412d1ecde36745e5349c8e93bb4dc3e0ba"} Feb 03 10:41:32 crc kubenswrapper[5010]: I0203 10:41:32.181818 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" event={"ID":"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d","Type":"ContainerStarted","Data":"dc60d854ffb0ca1de8c7268f0cc8371c9a244cdbcc3aab97ecb9ef8424edbc47"} Feb 03 10:41:32 crc kubenswrapper[5010]: I0203 10:41:32.216500 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" podStartSLOduration=1.733853201 podStartE2EDuration="2.216475578s" podCreationTimestamp="2026-02-03 10:41:30 +0000 UTC" firstStartedPulling="2026-02-03 10:41:31.127713886 +0000 UTC m=+2361.283690015" lastFinishedPulling="2026-02-03 10:41:31.610336263 +0000 UTC m=+2361.766312392" observedRunningTime="2026-02-03 10:41:32.211501595 +0000 UTC m=+2362.367477724" watchObservedRunningTime="2026-02-03 10:41:32.216475578 +0000 UTC m=+2362.372451707" Feb 03 10:41:41 crc kubenswrapper[5010]: I0203 10:41:41.502780 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:41:41 crc kubenswrapper[5010]: E0203 10:41:41.503451 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:41:55 crc kubenswrapper[5010]: I0203 10:41:55.504336 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:41:55 crc kubenswrapper[5010]: E0203 10:41:55.505235 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:42:07 crc kubenswrapper[5010]: I0203 10:42:07.502435 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:42:07 crc kubenswrapper[5010]: E0203 10:42:07.503311 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:42:18 crc kubenswrapper[5010]: I0203 10:42:18.502975 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:42:18 crc kubenswrapper[5010]: E0203 10:42:18.504064 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:42:33 crc kubenswrapper[5010]: I0203 10:42:33.503350 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:42:33 crc kubenswrapper[5010]: E0203 10:42:33.504246 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:42:44 crc kubenswrapper[5010]: I0203 10:42:44.503916 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:42:44 crc kubenswrapper[5010]: E0203 10:42:44.505571 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:42:57 crc kubenswrapper[5010]: I0203 10:42:57.503013 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:42:57 crc kubenswrapper[5010]: E0203 10:42:57.504290 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:43:08 crc kubenswrapper[5010]: I0203 10:43:08.502944 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:43:08 crc kubenswrapper[5010]: E0203 10:43:08.505729 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:43:22 crc kubenswrapper[5010]: I0203 10:43:22.503183 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:43:22 crc kubenswrapper[5010]: E0203 10:43:22.504478 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:43:36 crc kubenswrapper[5010]: I0203 10:43:36.502517 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:43:36 crc kubenswrapper[5010]: E0203 10:43:36.505165 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:43:47 crc kubenswrapper[5010]: I0203 10:43:47.502692 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:43:47 crc kubenswrapper[5010]: E0203 10:43:47.503771 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:44:01 crc kubenswrapper[5010]: I0203 10:44:01.502368 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:44:01 crc kubenswrapper[5010]: E0203 10:44:01.503665 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.009581 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ljhkd"] Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.013810 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.055091 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljhkd"] Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.125050 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-utilities\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.125166 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-catalog-content\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.125320 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwsxd\" (UniqueName: \"kubernetes.io/projected/0d017619-3ae1-48aa-aff8-d66d1f176806-kube-api-access-cwsxd\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.227639 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-utilities\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.227713 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-catalog-content\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.227843 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwsxd\" (UniqueName: \"kubernetes.io/projected/0d017619-3ae1-48aa-aff8-d66d1f176806-kube-api-access-cwsxd\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.228565 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-catalog-content\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.228862 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-utilities\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.258506 5010 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cwsxd\" (UniqueName: \"kubernetes.io/projected/0d017619-3ae1-48aa-aff8-d66d1f176806-kube-api-access-cwsxd\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.358958 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:04 crc kubenswrapper[5010]: I0203 10:44:04.056786 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljhkd"] Feb 03 10:44:04 crc kubenswrapper[5010]: I0203 10:44:04.421084 5010 generic.go:334] "Generic (PLEG): container finished" podID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerID="fab7f3bbda7f8de106f5a09ff1198783291792c10be97733b3f72a4e73a547fd" exitCode=0 Feb 03 10:44:04 crc kubenswrapper[5010]: I0203 10:44:04.421234 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljhkd" event={"ID":"0d017619-3ae1-48aa-aff8-d66d1f176806","Type":"ContainerDied","Data":"fab7f3bbda7f8de106f5a09ff1198783291792c10be97733b3f72a4e73a547fd"} Feb 03 10:44:04 crc kubenswrapper[5010]: I0203 10:44:04.421569 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljhkd" event={"ID":"0d017619-3ae1-48aa-aff8-d66d1f176806","Type":"ContainerStarted","Data":"ff8fcc1fa2ecc7eec7f5fd63831a577ce4a0643c9612428d734263739d579a21"} Feb 03 10:44:05 crc kubenswrapper[5010]: I0203 10:44:05.436096 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljhkd" event={"ID":"0d017619-3ae1-48aa-aff8-d66d1f176806","Type":"ContainerStarted","Data":"ef76ba3add7a763104f7648f68a14558e1c39fbfc1e2d61b5f71994e15a7a7d1"} Feb 03 10:44:06 crc kubenswrapper[5010]: I0203 10:44:06.450500 5010 generic.go:334] "Generic (PLEG): container finished" podID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerID="ef76ba3add7a763104f7648f68a14558e1c39fbfc1e2d61b5f71994e15a7a7d1" exitCode=0 Feb 03 10:44:06 crc kubenswrapper[5010]: I0203 10:44:06.450636 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljhkd" event={"ID":"0d017619-3ae1-48aa-aff8-d66d1f176806","Type":"ContainerDied","Data":"ef76ba3add7a763104f7648f68a14558e1c39fbfc1e2d61b5f71994e15a7a7d1"} Feb 03 10:44:08 crc kubenswrapper[5010]: I0203 10:44:08.472739 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljhkd" event={"ID":"0d017619-3ae1-48aa-aff8-d66d1f176806","Type":"ContainerStarted","Data":"512a029216a528a2623119f8633f77e481b1b71064ff8ef79eee80c6c8d52d24"} Feb 03 10:44:08 crc kubenswrapper[5010]: I0203 10:44:08.494063 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ljhkd" podStartSLOduration=3.6406730080000003 podStartE2EDuration="6.494043586s" podCreationTimestamp="2026-02-03 10:44:02 +0000 UTC" firstStartedPulling="2026-02-03 10:44:04.423270652 +0000 UTC m=+2514.579246781" lastFinishedPulling="2026-02-03 10:44:07.27664123 +0000 UTC m=+2517.432617359" observedRunningTime="2026-02-03 10:44:08.491569595 +0000 UTC m=+2518.647545734" watchObservedRunningTime="2026-02-03 10:44:08.494043586 +0000 UTC m=+2518.650019715" Feb 03 10:44:12 crc kubenswrapper[5010]: I0203 10:44:12.949563 5010 scope.go:117] "RemoveContainer" 
containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:44:12 crc kubenswrapper[5010]: E0203 10:44:12.950464 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:44:13 crc kubenswrapper[5010]: I0203 10:44:13.359419 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:13 crc kubenswrapper[5010]: I0203 10:44:13.359467 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:13 crc kubenswrapper[5010]: I0203 10:44:13.412589 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:14 crc kubenswrapper[5010]: I0203 10:44:14.362644 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:14 crc kubenswrapper[5010]: I0203 10:44:14.432317 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljhkd"] Feb 03 10:44:16 crc kubenswrapper[5010]: I0203 10:44:16.314276 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ljhkd" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="registry-server" containerID="cri-o://512a029216a528a2623119f8633f77e481b1b71064ff8ef79eee80c6c8d52d24" gracePeriod=2 Feb 03 10:44:17 crc kubenswrapper[5010]: I0203 10:44:17.328848 5010 generic.go:334] "Generic (PLEG): container finished" podID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerID="512a029216a528a2623119f8633f77e481b1b71064ff8ef79eee80c6c8d52d24" exitCode=0 Feb 03 10:44:17 crc kubenswrapper[5010]: I0203 10:44:17.328939 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljhkd" event={"ID":"0d017619-3ae1-48aa-aff8-d66d1f176806","Type":"ContainerDied","Data":"512a029216a528a2623119f8633f77e481b1b71064ff8ef79eee80c6c8d52d24"} Feb 03 10:44:17 crc kubenswrapper[5010]: I0203 10:44:17.982828 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.053685 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-catalog-content\") pod \"0d017619-3ae1-48aa-aff8-d66d1f176806\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.053810 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwsxd\" (UniqueName: \"kubernetes.io/projected/0d017619-3ae1-48aa-aff8-d66d1f176806-kube-api-access-cwsxd\") pod \"0d017619-3ae1-48aa-aff8-d66d1f176806\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.054099 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-utilities\") pod \"0d017619-3ae1-48aa-aff8-d66d1f176806\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.055190 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-utilities" (OuterVolumeSpecName: "utilities") pod "0d017619-3ae1-48aa-aff8-d66d1f176806" (UID: "0d017619-3ae1-48aa-aff8-d66d1f176806"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.064308 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d017619-3ae1-48aa-aff8-d66d1f176806-kube-api-access-cwsxd" (OuterVolumeSpecName: "kube-api-access-cwsxd") pod "0d017619-3ae1-48aa-aff8-d66d1f176806" (UID: "0d017619-3ae1-48aa-aff8-d66d1f176806"). InnerVolumeSpecName "kube-api-access-cwsxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.119488 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d017619-3ae1-48aa-aff8-d66d1f176806" (UID: "0d017619-3ae1-48aa-aff8-d66d1f176806"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.156962 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.157014 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.157028 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwsxd\" (UniqueName: \"kubernetes.io/projected/0d017619-3ae1-48aa-aff8-d66d1f176806-kube-api-access-cwsxd\") on node \"crc\" DevicePath \"\"" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.344150 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljhkd" event={"ID":"0d017619-3ae1-48aa-aff8-d66d1f176806","Type":"ContainerDied","Data":"ff8fcc1fa2ecc7eec7f5fd63831a577ce4a0643c9612428d734263739d579a21"} Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.344255 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.344264 5010 scope.go:117] "RemoveContainer" containerID="512a029216a528a2623119f8633f77e481b1b71064ff8ef79eee80c6c8d52d24" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.395296 5010 scope.go:117] "RemoveContainer" containerID="ef76ba3add7a763104f7648f68a14558e1c39fbfc1e2d61b5f71994e15a7a7d1" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.404678 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljhkd"] Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.416165 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ljhkd"] Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.430208 5010 scope.go:117] "RemoveContainer" containerID="fab7f3bbda7f8de106f5a09ff1198783291792c10be97733b3f72a4e73a547fd" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.521339 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" path="/var/lib/kubelet/pods/0d017619-3ae1-48aa-aff8-d66d1f176806/volumes" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.236115 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hclqp"] Feb 03 10:44:20 crc kubenswrapper[5010]: E0203 10:44:20.236989 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="extract-utilities" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.237008 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="extract-utilities" Feb 03 10:44:20 crc kubenswrapper[5010]: E0203 10:44:20.237023 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="registry-server" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.237031 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="registry-server" Feb 03 10:44:20 crc kubenswrapper[5010]: E0203 10:44:20.237068 5010 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="extract-content" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.237078 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="extract-content" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.237324 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="registry-server" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.239148 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.257630 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hclqp"] Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.307745 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-utilities\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.307825 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-catalog-content\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.308143 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lj7w\" (UniqueName: \"kubernetes.io/projected/4b028f28-bcda-4f8c-9203-28d3ca53b83f-kube-api-access-6lj7w\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.411562 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-utilities\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.411627 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-catalog-content\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.411710 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lj7w\" (UniqueName: \"kubernetes.io/projected/4b028f28-bcda-4f8c-9203-28d3ca53b83f-kube-api-access-6lj7w\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.412524 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-utilities\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.412547 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-catalog-content\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.437013 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lj7w\" (UniqueName: \"kubernetes.io/projected/4b028f28-bcda-4f8c-9203-28d3ca53b83f-kube-api-access-6lj7w\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.568894 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:21 crc kubenswrapper[5010]: I0203 10:44:21.227859 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hclqp"] Feb 03 10:44:21 crc kubenswrapper[5010]: I0203 10:44:21.390661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hclqp" event={"ID":"4b028f28-bcda-4f8c-9203-28d3ca53b83f","Type":"ContainerStarted","Data":"dc03929ced3815aaa6a44ceafc9ccce5fe2d5067d9e2ca6ab02e4bd24f776596"} Feb 03 10:44:22 crc kubenswrapper[5010]: I0203 10:44:22.402546 5010 generic.go:334] "Generic (PLEG): container finished" podID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerID="ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a" exitCode=0 Feb 03 10:44:22 crc kubenswrapper[5010]: I0203 10:44:22.402889 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hclqp" event={"ID":"4b028f28-bcda-4f8c-9203-28d3ca53b83f","Type":"ContainerDied","Data":"ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a"} Feb 03 10:44:23 crc kubenswrapper[5010]: I0203 10:44:23.420637 5010 generic.go:334] "Generic (PLEG): container finished" podID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerID="d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee" exitCode=0 Feb 03 10:44:23 crc kubenswrapper[5010]: I0203 10:44:23.420732 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hclqp" event={"ID":"4b028f28-bcda-4f8c-9203-28d3ca53b83f","Type":"ContainerDied","Data":"d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee"} Feb 03 10:44:24 crc kubenswrapper[5010]: I0203 10:44:24.436429 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hclqp" event={"ID":"4b028f28-bcda-4f8c-9203-28d3ca53b83f","Type":"ContainerStarted","Data":"1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb"} Feb 03 10:44:24 crc kubenswrapper[5010]: I0203 10:44:24.466096 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hclqp" podStartSLOduration=3.02439172 podStartE2EDuration="4.466068684s" podCreationTimestamp="2026-02-03 10:44:20 +0000 UTC" firstStartedPulling="2026-02-03 10:44:22.406496514 +0000 UTC m=+2532.562472643" 
lastFinishedPulling="2026-02-03 10:44:23.848173478 +0000 UTC m=+2534.004149607" observedRunningTime="2026-02-03 10:44:24.460806204 +0000 UTC m=+2534.616782343" watchObservedRunningTime="2026-02-03 10:44:24.466068684 +0000 UTC m=+2534.622044813" Feb 03 10:44:24 crc kubenswrapper[5010]: I0203 10:44:24.503436 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:44:24 crc kubenswrapper[5010]: E0203 10:44:24.503733 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:44:30 crc kubenswrapper[5010]: I0203 10:44:30.569571 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:30 crc kubenswrapper[5010]: I0203 10:44:30.571723 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:30 crc kubenswrapper[5010]: I0203 10:44:30.620327 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:31 crc kubenswrapper[5010]: I0203 10:44:31.575719 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:31 crc kubenswrapper[5010]: I0203 10:44:31.640259 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hclqp"] Feb 03 10:44:33 crc kubenswrapper[5010]: I0203 10:44:33.541743 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hclqp" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerName="registry-server" containerID="cri-o://1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb" gracePeriod=2 Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.184973 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.279068 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-utilities\") pod \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.279254 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lj7w\" (UniqueName: \"kubernetes.io/projected/4b028f28-bcda-4f8c-9203-28d3ca53b83f-kube-api-access-6lj7w\") pod \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.279359 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-catalog-content\") pod \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.280403 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-utilities" (OuterVolumeSpecName: "utilities") pod "4b028f28-bcda-4f8c-9203-28d3ca53b83f" (UID: "4b028f28-bcda-4f8c-9203-28d3ca53b83f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.289621 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b028f28-bcda-4f8c-9203-28d3ca53b83f-kube-api-access-6lj7w" (OuterVolumeSpecName: "kube-api-access-6lj7w") pod "4b028f28-bcda-4f8c-9203-28d3ca53b83f" (UID: "4b028f28-bcda-4f8c-9203-28d3ca53b83f"). InnerVolumeSpecName "kube-api-access-6lj7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.310538 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b028f28-bcda-4f8c-9203-28d3ca53b83f" (UID: "4b028f28-bcda-4f8c-9203-28d3ca53b83f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.382123 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.382168 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.382184 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lj7w\" (UniqueName: \"kubernetes.io/projected/4b028f28-bcda-4f8c-9203-28d3ca53b83f-kube-api-access-6lj7w\") on node \"crc\" DevicePath \"\"" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.553469 5010 generic.go:334] "Generic (PLEG): container finished" podID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerID="1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb" exitCode=0 Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.553537 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hclqp" event={"ID":"4b028f28-bcda-4f8c-9203-28d3ca53b83f","Type":"ContainerDied","Data":"1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb"} Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.553879 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hclqp" event={"ID":"4b028f28-bcda-4f8c-9203-28d3ca53b83f","Type":"ContainerDied","Data":"dc03929ced3815aaa6a44ceafc9ccce5fe2d5067d9e2ca6ab02e4bd24f776596"} Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.553917 5010 scope.go:117] "RemoveContainer" containerID="1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.553616 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.580203 5010 scope.go:117] "RemoveContainer" containerID="d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.598706 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hclqp"] Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.609522 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hclqp"] Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.619286 5010 scope.go:117] "RemoveContainer" containerID="ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.656663 5010 scope.go:117] "RemoveContainer" containerID="1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb" Feb 03 10:44:34 crc kubenswrapper[5010]: E0203 10:44:34.657362 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb\": container with ID starting with 1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb not found: ID does not exist" containerID="1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.657411 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb"} err="failed to get container status \"1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb\": rpc error: code = NotFound desc = could not find container \"1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb\": container with ID starting with 1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb not found: ID does not exist" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.657442 5010 scope.go:117] "RemoveContainer" containerID="d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee" Feb 03 10:44:34 crc kubenswrapper[5010]: E0203 10:44:34.657751 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee\": container with ID starting with d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee not found: ID does not exist" containerID="d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.657862 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee"} err="failed to get container status \"d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee\": rpc error: code = NotFound desc = could not find container \"d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee\": container with ID starting with d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee not found: ID does not exist" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.657957 5010 scope.go:117] "RemoveContainer" containerID="ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a" Feb 03 10:44:34 crc kubenswrapper[5010]: E0203 10:44:34.658310 5010 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a\": container with ID starting with ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a not found: ID does not exist" containerID="ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a" Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.658415 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a"} err="failed to get container status \"ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a\": rpc error: code = NotFound desc = could not find container \"ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a\": container with ID starting with ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a not found: ID does not exist" Feb 03 10:44:35 crc kubenswrapper[5010]: I0203 10:44:35.503150 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:44:35 crc kubenswrapper[5010]: E0203 10:44:35.503939 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:44:36 crc kubenswrapper[5010]: I0203 10:44:36.515625 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" path="/var/lib/kubelet/pods/4b028f28-bcda-4f8c-9203-28d3ca53b83f/volumes" Feb 03 10:44:47 crc kubenswrapper[5010]: I0203 10:44:47.502711 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:44:48 crc kubenswrapper[5010]: I0203 10:44:48.703178 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"b61671ae7473626ed1f7e8bbc62ee5800e0d1f9237e36316dd37140b902ac261"} Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.170321 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"] Feb 03 10:45:00 crc kubenswrapper[5010]: E0203 10:45:00.172265 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerName="extract-content" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.172286 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerName="extract-content" Feb 03 10:45:00 crc kubenswrapper[5010]: E0203 10:45:00.172327 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerName="extract-utilities" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.172337 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerName="extract-utilities" Feb 03 10:45:00 crc kubenswrapper[5010]: E0203 10:45:00.172371 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" 
containerName="registry-server" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.172398 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerName="registry-server" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.172687 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerName="registry-server" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.174003 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.176657 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.178684 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.190997 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"] Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.201966 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f097429-a5b4-4a4a-8b81-6194870abf2e-secret-volume\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.202050 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdcrq\" (UniqueName: \"kubernetes.io/projected/8f097429-a5b4-4a4a-8b81-6194870abf2e-kube-api-access-hdcrq\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.202087 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f097429-a5b4-4a4a-8b81-6194870abf2e-config-volume\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.304533 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdcrq\" (UniqueName: \"kubernetes.io/projected/8f097429-a5b4-4a4a-8b81-6194870abf2e-kube-api-access-hdcrq\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.304609 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f097429-a5b4-4a4a-8b81-6194870abf2e-config-volume\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.306115 5010 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f097429-a5b4-4a4a-8b81-6194870abf2e-secret-volume\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.306776 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f097429-a5b4-4a4a-8b81-6194870abf2e-config-volume\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.316094 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f097429-a5b4-4a4a-8b81-6194870abf2e-secret-volume\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.327564 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdcrq\" (UniqueName: \"kubernetes.io/projected/8f097429-a5b4-4a4a-8b81-6194870abf2e-kube-api-access-hdcrq\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.509740 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" Feb 03 10:45:01 crc kubenswrapper[5010]: I0203 10:45:01.019394 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"] Feb 03 10:45:01 crc kubenswrapper[5010]: W0203 10:45:01.023809 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f097429_a5b4_4a4a_8b81_6194870abf2e.slice/crio-7cb7db4695300eb847dfc5ba9e2d7a41baea67d3357353b3d4f124680a6934ee WatchSource:0}: Error finding container 7cb7db4695300eb847dfc5ba9e2d7a41baea67d3357353b3d4f124680a6934ee: Status 404 returned error can't find the container with id 7cb7db4695300eb847dfc5ba9e2d7a41baea67d3357353b3d4f124680a6934ee Feb 03 10:45:01 crc kubenswrapper[5010]: I0203 10:45:01.856441 5010 generic.go:334] "Generic (PLEG): container finished" podID="8f097429-a5b4-4a4a-8b81-6194870abf2e" containerID="7d241b2d31d82749007029bfa402aa0fd6743ec37cf714478cf0ae1697c8b93d" exitCode=0 Feb 03 10:45:01 crc kubenswrapper[5010]: I0203 10:45:01.856559 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" event={"ID":"8f097429-a5b4-4a4a-8b81-6194870abf2e","Type":"ContainerDied","Data":"7d241b2d31d82749007029bfa402aa0fd6743ec37cf714478cf0ae1697c8b93d"} Feb 03 10:45:01 crc kubenswrapper[5010]: I0203 10:45:01.856879 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" event={"ID":"8f097429-a5b4-4a4a-8b81-6194870abf2e","Type":"ContainerStarted","Data":"7cb7db4695300eb847dfc5ba9e2d7a41baea67d3357353b3d4f124680a6934ee"} Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.314251 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.406165 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f097429-a5b4-4a4a-8b81-6194870abf2e-config-volume\") pod \"8f097429-a5b4-4a4a-8b81-6194870abf2e\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.406502 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdcrq\" (UniqueName: \"kubernetes.io/projected/8f097429-a5b4-4a4a-8b81-6194870abf2e-kube-api-access-hdcrq\") pod \"8f097429-a5b4-4a4a-8b81-6194870abf2e\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.406598 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f097429-a5b4-4a4a-8b81-6194870abf2e-secret-volume\") pod \"8f097429-a5b4-4a4a-8b81-6194870abf2e\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.406976 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f097429-a5b4-4a4a-8b81-6194870abf2e-config-volume" (OuterVolumeSpecName: "config-volume") pod "8f097429-a5b4-4a4a-8b81-6194870abf2e" (UID: "8f097429-a5b4-4a4a-8b81-6194870abf2e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.407505 5010 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f097429-a5b4-4a4a-8b81-6194870abf2e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.414286 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f097429-a5b4-4a4a-8b81-6194870abf2e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8f097429-a5b4-4a4a-8b81-6194870abf2e" (UID: "8f097429-a5b4-4a4a-8b81-6194870abf2e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.416582 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f097429-a5b4-4a4a-8b81-6194870abf2e-kube-api-access-hdcrq" (OuterVolumeSpecName: "kube-api-access-hdcrq") pod "8f097429-a5b4-4a4a-8b81-6194870abf2e" (UID: "8f097429-a5b4-4a4a-8b81-6194870abf2e"). InnerVolumeSpecName "kube-api-access-hdcrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.508566 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdcrq\" (UniqueName: \"kubernetes.io/projected/8f097429-a5b4-4a4a-8b81-6194870abf2e-kube-api-access-hdcrq\") on node \"crc\" DevicePath \"\"" Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.508606 5010 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f097429-a5b4-4a4a-8b81-6194870abf2e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.879115 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" event={"ID":"8f097429-a5b4-4a4a-8b81-6194870abf2e","Type":"ContainerDied","Data":"7cb7db4695300eb847dfc5ba9e2d7a41baea67d3357353b3d4f124680a6934ee"} Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.879613 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cb7db4695300eb847dfc5ba9e2d7a41baea67d3357353b3d4f124680a6934ee" Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.879264 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" Feb 03 10:45:04 crc kubenswrapper[5010]: I0203 10:45:04.455283 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"] Feb 03 10:45:04 crc kubenswrapper[5010]: I0203 10:45:04.465103 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"] Feb 03 10:45:04 crc kubenswrapper[5010]: I0203 10:45:04.513571 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9c4aab-790c-4581-bfc2-ad1d7302c704" path="/var/lib/kubelet/pods/9b9c4aab-790c-4581-bfc2-ad1d7302c704/volumes" Feb 03 10:45:15 crc kubenswrapper[5010]: I0203 10:45:15.000703 5010 generic.go:334] "Generic (PLEG): container finished" podID="5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" containerID="dc60d854ffb0ca1de8c7268f0cc8371c9a244cdbcc3aab97ecb9ef8424edbc47" exitCode=0 Feb 03 10:45:15 crc kubenswrapper[5010]: I0203 10:45:15.000803 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" event={"ID":"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d","Type":"ContainerDied","Data":"dc60d854ffb0ca1de8c7268f0cc8371c9a244cdbcc3aab97ecb9ef8424edbc47"} Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.618158 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.767070 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-secret-0\") pod \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.767242 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-inventory\") pod \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.767306 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5fnn\" (UniqueName: \"kubernetes.io/projected/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-kube-api-access-p5fnn\") pod \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.767469 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-ssh-key-openstack-edpm-ipam\") pod \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.767503 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-combined-ca-bundle\") pod \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.775330 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" (UID: "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.775458 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-kube-api-access-p5fnn" (OuterVolumeSpecName: "kube-api-access-p5fnn") pod "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" (UID: "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d"). InnerVolumeSpecName "kube-api-access-p5fnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.802767 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" (UID: "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.813117 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" (UID: "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.821519 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-inventory" (OuterVolumeSpecName: "inventory") pod "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" (UID: "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.870628 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.870692 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5fnn\" (UniqueName: \"kubernetes.io/projected/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-kube-api-access-p5fnn\") on node \"crc\" DevicePath \"\"" Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.870705 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.870715 5010 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.870732 5010 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.025409 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" event={"ID":"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d","Type":"ContainerDied","Data":"27b2e3f9236cd72b126e3e7945fd42412d1ecde36745e5349c8e93bb4dc3e0ba"} Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.025843 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27b2e3f9236cd72b126e3e7945fd42412d1ecde36745e5349c8e93bb4dc3e0ba" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.025677 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.145537 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"] Feb 03 10:45:17 crc kubenswrapper[5010]: E0203 10:45:17.146004 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f097429-a5b4-4a4a-8b81-6194870abf2e" containerName="collect-profiles" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.146026 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f097429-a5b4-4a4a-8b81-6194870abf2e" containerName="collect-profiles" Feb 03 10:45:17 crc kubenswrapper[5010]: E0203 10:45:17.146064 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.146080 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.146434 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.146466 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f097429-a5b4-4a4a-8b81-6194870abf2e" containerName="collect-profiles" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.147368 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.149661 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.149812 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.149976 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.151200 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.151397 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.151441 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.152441 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.164291 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"] Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.284791 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: 
\"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.284920 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.284956 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.284985 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.285153 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.285229 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2f6\" (UniqueName: \"kubernetes.io/projected/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-kube-api-access-wb2f6\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.285286 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.285385 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.285618 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.387615 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.387679 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.387710 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.387746 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.387774 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb2f6\" (UniqueName: \"kubernetes.io/projected/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-kube-api-access-wb2f6\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.387837 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.387873 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.388383 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.388933 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.389270 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.392910 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.394940 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.395080 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.395237 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.397490 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.403064 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-ssh-key-openstack-edpm-ipam\") 
pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.404015 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.408949 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb2f6\" (UniqueName: \"kubernetes.io/projected/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-kube-api-access-wb2f6\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.471533 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:45:18 crc kubenswrapper[5010]: I0203 10:45:18.036085 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"] Feb 03 10:45:19 crc kubenswrapper[5010]: I0203 10:45:19.050710 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" event={"ID":"6fd37dcf-e81a-491a-a5e1-01a27517d1b4","Type":"ContainerStarted","Data":"b92d5a51c76184465825f539bc982313c7d3a25990aaa74ff31547c87be3d118"} Feb 03 10:45:19 crc kubenswrapper[5010]: I0203 10:45:19.051364 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" event={"ID":"6fd37dcf-e81a-491a-a5e1-01a27517d1b4","Type":"ContainerStarted","Data":"3bdabc9e7c7a1e119a5dd6eb67d8df00ac4cf05c96ad5b5ff0ff7555b937fc53"} Feb 03 10:45:19 crc kubenswrapper[5010]: I0203 10:45:19.078531 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" podStartSLOduration=1.617542332 podStartE2EDuration="2.078498475s" podCreationTimestamp="2026-02-03 10:45:17 +0000 UTC" firstStartedPulling="2026-02-03 10:45:18.044249998 +0000 UTC m=+2588.200226127" lastFinishedPulling="2026-02-03 10:45:18.505206141 +0000 UTC m=+2588.661182270" observedRunningTime="2026-02-03 10:45:19.075299896 +0000 UTC m=+2589.231276035" watchObservedRunningTime="2026-02-03 10:45:19.078498475 +0000 UTC m=+2589.234474614" Feb 03 10:45:22 crc kubenswrapper[5010]: I0203 10:45:22.682868 5010 scope.go:117] "RemoveContainer" containerID="15e10260ef913b6b44e27ef0b7816cd144403f167a0779e8880ec7a69901a07c" Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.808979 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8mzgl"] Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.811871 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.838142 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8mzgl"] Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.889623 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pcnv\" (UniqueName: \"kubernetes.io/projected/f79efd93-79ed-4459-9345-c203dd95ce20-kube-api-access-9pcnv\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.889702 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-utilities\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.889807 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-catalog-content\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.991246 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pcnv\" (UniqueName: \"kubernetes.io/projected/f79efd93-79ed-4459-9345-c203dd95ce20-kube-api-access-9pcnv\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.991722 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-utilities\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.991844 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-catalog-content\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.992647 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-utilities\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.992710 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-catalog-content\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:12 crc kubenswrapper[5010]: I0203 10:46:12.016251 5010 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9pcnv\" (UniqueName: \"kubernetes.io/projected/f79efd93-79ed-4459-9345-c203dd95ce20-kube-api-access-9pcnv\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:12 crc kubenswrapper[5010]: I0203 10:46:12.132764 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:12 crc kubenswrapper[5010]: I0203 10:46:12.659244 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8mzgl"] Feb 03 10:46:13 crc kubenswrapper[5010]: I0203 10:46:13.586074 5010 generic.go:334] "Generic (PLEG): container finished" podID="f79efd93-79ed-4459-9345-c203dd95ce20" containerID="c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85" exitCode=0 Feb 03 10:46:13 crc kubenswrapper[5010]: I0203 10:46:13.586278 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mzgl" event={"ID":"f79efd93-79ed-4459-9345-c203dd95ce20","Type":"ContainerDied","Data":"c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85"} Feb 03 10:46:13 crc kubenswrapper[5010]: I0203 10:46:13.586464 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mzgl" event={"ID":"f79efd93-79ed-4459-9345-c203dd95ce20","Type":"ContainerStarted","Data":"863be305abcb465ebca5aea60206885e34000c983fd7bff7e9942058c39d5010"} Feb 03 10:46:15 crc kubenswrapper[5010]: I0203 10:46:15.609033 5010 generic.go:334] "Generic (PLEG): container finished" podID="f79efd93-79ed-4459-9345-c203dd95ce20" containerID="98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0" exitCode=0 Feb 03 10:46:15 crc kubenswrapper[5010]: I0203 10:46:15.609155 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mzgl" event={"ID":"f79efd93-79ed-4459-9345-c203dd95ce20","Type":"ContainerDied","Data":"98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0"} Feb 03 10:46:16 crc kubenswrapper[5010]: I0203 10:46:16.633249 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mzgl" event={"ID":"f79efd93-79ed-4459-9345-c203dd95ce20","Type":"ContainerStarted","Data":"5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468"} Feb 03 10:46:16 crc kubenswrapper[5010]: I0203 10:46:16.705543 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8mzgl" podStartSLOduration=3.251091633 podStartE2EDuration="5.705511981s" podCreationTimestamp="2026-02-03 10:46:11 +0000 UTC" firstStartedPulling="2026-02-03 10:46:13.589176573 +0000 UTC m=+2643.745152702" lastFinishedPulling="2026-02-03 10:46:16.043596921 +0000 UTC m=+2646.199573050" observedRunningTime="2026-02-03 10:46:16.698612024 +0000 UTC m=+2646.854588163" watchObservedRunningTime="2026-02-03 10:46:16.705511981 +0000 UTC m=+2646.861488120" Feb 03 10:46:22 crc kubenswrapper[5010]: I0203 10:46:22.134339 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:22 crc kubenswrapper[5010]: I0203 10:46:22.134987 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:22 crc kubenswrapper[5010]: I0203 10:46:22.189039 5010 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:22 crc kubenswrapper[5010]: I0203 10:46:22.745003 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:22 crc kubenswrapper[5010]: I0203 10:46:22.804036 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8mzgl"] Feb 03 10:46:24 crc kubenswrapper[5010]: I0203 10:46:24.716916 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8mzgl" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="registry-server" containerID="cri-o://5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468" gracePeriod=2 Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.171960 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.316632 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pcnv\" (UniqueName: \"kubernetes.io/projected/f79efd93-79ed-4459-9345-c203dd95ce20-kube-api-access-9pcnv\") pod \"f79efd93-79ed-4459-9345-c203dd95ce20\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.317037 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-catalog-content\") pod \"f79efd93-79ed-4459-9345-c203dd95ce20\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.317331 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-utilities\") pod \"f79efd93-79ed-4459-9345-c203dd95ce20\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.318509 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-utilities" (OuterVolumeSpecName: "utilities") pod "f79efd93-79ed-4459-9345-c203dd95ce20" (UID: "f79efd93-79ed-4459-9345-c203dd95ce20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.325498 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79efd93-79ed-4459-9345-c203dd95ce20-kube-api-access-9pcnv" (OuterVolumeSpecName: "kube-api-access-9pcnv") pod "f79efd93-79ed-4459-9345-c203dd95ce20" (UID: "f79efd93-79ed-4459-9345-c203dd95ce20"). InnerVolumeSpecName "kube-api-access-9pcnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.419790 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.421367 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pcnv\" (UniqueName: \"kubernetes.io/projected/f79efd93-79ed-4459-9345-c203dd95ce20-kube-api-access-9pcnv\") on node \"crc\" DevicePath \"\"" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.661591 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f79efd93-79ed-4459-9345-c203dd95ce20" (UID: "f79efd93-79ed-4459-9345-c203dd95ce20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.727408 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.733529 5010 generic.go:334] "Generic (PLEG): container finished" podID="f79efd93-79ed-4459-9345-c203dd95ce20" containerID="5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468" exitCode=0 Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.733587 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mzgl" event={"ID":"f79efd93-79ed-4459-9345-c203dd95ce20","Type":"ContainerDied","Data":"5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468"} Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.733599 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8mzgl" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.733623 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mzgl" event={"ID":"f79efd93-79ed-4459-9345-c203dd95ce20","Type":"ContainerDied","Data":"863be305abcb465ebca5aea60206885e34000c983fd7bff7e9942058c39d5010"} Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.733649 5010 scope.go:117] "RemoveContainer" containerID="5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.775533 5010 scope.go:117] "RemoveContainer" containerID="98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.788766 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8mzgl"] Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.802348 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8mzgl"] Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.805754 5010 scope.go:117] "RemoveContainer" containerID="c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.848084 5010 scope.go:117] "RemoveContainer" containerID="5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468" Feb 03 10:46:25 crc kubenswrapper[5010]: E0203 10:46:25.848848 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468\": container with ID starting with 5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468 not found: ID does not exist" containerID="5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.848943 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468"} err="failed to get container status \"5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468\": rpc error: code = NotFound desc = could not find container \"5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468\": container with ID starting with 5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468 not found: ID does not exist" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.849025 5010 scope.go:117] "RemoveContainer" containerID="98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0" Feb 03 10:46:25 crc kubenswrapper[5010]: E0203 10:46:25.849484 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0\": container with ID starting with 98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0 not found: ID does not exist" containerID="98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.849515 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0"} err="failed to get container status \"98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0\": rpc error: code = NotFound desc = could not find 
container \"98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0\": container with ID starting with 98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0 not found: ID does not exist" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.849539 5010 scope.go:117] "RemoveContainer" containerID="c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85" Feb 03 10:46:25 crc kubenswrapper[5010]: E0203 10:46:25.849829 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85\": container with ID starting with c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85 not found: ID does not exist" containerID="c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85" Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.849867 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85"} err="failed to get container status \"c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85\": rpc error: code = NotFound desc = could not find container \"c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85\": container with ID starting with c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85 not found: ID does not exist" Feb 03 10:46:26 crc kubenswrapper[5010]: I0203 10:46:26.515179 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" path="/var/lib/kubelet/pods/f79efd93-79ed-4459-9345-c203dd95ce20/volumes" Feb 03 10:47:16 crc kubenswrapper[5010]: I0203 10:47:16.392241 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:47:16 crc kubenswrapper[5010]: I0203 10:47:16.393393 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:47:25 crc kubenswrapper[5010]: I0203 10:47:25.334879 5010 generic.go:334] "Generic (PLEG): container finished" podID="6fd37dcf-e81a-491a-a5e1-01a27517d1b4" containerID="b92d5a51c76184465825f539bc982313c7d3a25990aaa74ff31547c87be3d118" exitCode=0 Feb 03 10:47:25 crc kubenswrapper[5010]: I0203 10:47:25.335074 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" event={"ID":"6fd37dcf-e81a-491a-a5e1-01a27517d1b4","Type":"ContainerDied","Data":"b92d5a51c76184465825f539bc982313c7d3a25990aaa74ff31547c87be3d118"} Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.808133 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.877509 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb2f6\" (UniqueName: \"kubernetes.io/projected/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-kube-api-access-wb2f6\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.877678 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-ssh-key-openstack-edpm-ipam\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.878409 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-inventory\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.878479 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-combined-ca-bundle\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.878527 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-0\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.878593 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-1\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.879442 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-extra-config-0\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.879880 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-1\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.879920 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-0\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.892335 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-kube-api-access-wb2f6" (OuterVolumeSpecName: "kube-api-access-wb2f6") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "kube-api-access-wb2f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.893007 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.908072 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.927760 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.929430 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-inventory" (OuterVolumeSpecName: "inventory") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.936798 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.949882 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.952933 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.962626 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982726 5010 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982780 5010 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982790 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb2f6\" (UniqueName: \"kubernetes.io/projected/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-kube-api-access-wb2f6\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982803 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982817 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982831 5010 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982843 5010 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982855 5010 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982869 5010 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.358090 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" event={"ID":"6fd37dcf-e81a-491a-a5e1-01a27517d1b4","Type":"ContainerDied","Data":"3bdabc9e7c7a1e119a5dd6eb67d8df00ac4cf05c96ad5b5ff0ff7555b937fc53"} Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.358785 5010 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3bdabc9e7c7a1e119a5dd6eb67d8df00ac4cf05c96ad5b5ff0ff7555b937fc53" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.358189 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.477699 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h"] Feb 03 10:47:27 crc kubenswrapper[5010]: E0203 10:47:27.478570 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="extract-content" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.478669 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="extract-content" Feb 03 10:47:27 crc kubenswrapper[5010]: E0203 10:47:27.478756 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="registry-server" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.478823 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="registry-server" Feb 03 10:47:27 crc kubenswrapper[5010]: E0203 10:47:27.478888 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="extract-utilities" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.478954 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="extract-utilities" Feb 03 10:47:27 crc kubenswrapper[5010]: E0203 10:47:27.479011 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd37dcf-e81a-491a-a5e1-01a27517d1b4" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.479064 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd37dcf-e81a-491a-a5e1-01a27517d1b4" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.479351 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd37dcf-e81a-491a-a5e1-01a27517d1b4" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.479444 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="registry-server" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.480317 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.483772 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.484179 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.484328 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.484275 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.487255 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.493418 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h"] Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.620647 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.620733 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.620855 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-727r4\" (UniqueName: \"kubernetes.io/projected/7353ead1-b7ae-446c-a262-5a383b1d7e52-kube-api-access-727r4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.620897 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.620973 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc 
kubenswrapper[5010]: I0203 10:47:27.621036 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.621093 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.723791 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.723894 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.723975 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.724034 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.724130 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-727r4\" (UniqueName: \"kubernetes.io/projected/7353ead1-b7ae-446c-a262-5a383b1d7e52-kube-api-access-727r4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.724192 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.724267 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.729018 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.729038 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.729547 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.730094 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.728958 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.743743 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-727r4\" (UniqueName: \"kubernetes.io/projected/7353ead1-b7ae-446c-a262-5a383b1d7e52-kube-api-access-727r4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.743766 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.830035 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:28 crc kubenswrapper[5010]: I0203 10:47:28.239418 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h"] Feb 03 10:47:28 crc kubenswrapper[5010]: I0203 10:47:28.251707 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 10:47:28 crc kubenswrapper[5010]: I0203 10:47:28.370086 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" event={"ID":"7353ead1-b7ae-446c-a262-5a383b1d7e52","Type":"ContainerStarted","Data":"91d0640bf20723aa34494df221748d24f3bd4a04ce7159801cea99aea978bc5e"} Feb 03 10:47:29 crc kubenswrapper[5010]: I0203 10:47:29.383701 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" event={"ID":"7353ead1-b7ae-446c-a262-5a383b1d7e52","Type":"ContainerStarted","Data":"b4880425775fd70bc813079913bbf8f5c4f8f371571355c4da87c44a571b62e6"} Feb 03 10:47:29 crc kubenswrapper[5010]: I0203 10:47:29.417943 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" podStartSLOduration=1.888472446 podStartE2EDuration="2.417913909s" podCreationTimestamp="2026-02-03 10:47:27 +0000 UTC" firstStartedPulling="2026-02-03 10:47:28.251403472 +0000 UTC m=+2718.407379601" lastFinishedPulling="2026-02-03 10:47:28.780844935 +0000 UTC m=+2718.936821064" observedRunningTime="2026-02-03 10:47:29.413547108 +0000 UTC m=+2719.569523237" watchObservedRunningTime="2026-02-03 10:47:29.417913909 +0000 UTC m=+2719.573890038" Feb 03 10:47:46 crc kubenswrapper[5010]: I0203 10:47:46.393379 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:47:46 crc kubenswrapper[5010]: I0203 10:47:46.394026 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.389925 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.390664 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.390745 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.391396 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b61671ae7473626ed1f7e8bbc62ee5800e0d1f9237e36316dd37140b902ac261"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.391450 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://b61671ae7473626ed1f7e8bbc62ee5800e0d1f9237e36316dd37140b902ac261" gracePeriod=600 Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.925062 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="b61671ae7473626ed1f7e8bbc62ee5800e0d1f9237e36316dd37140b902ac261" exitCode=0 Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.925165 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"b61671ae7473626ed1f7e8bbc62ee5800e0d1f9237e36316dd37140b902ac261"} Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.925513 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"} Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.925550 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:49:42 crc kubenswrapper[5010]: I0203 10:49:42.863890 5010 generic.go:334] "Generic (PLEG): container finished" podID="7353ead1-b7ae-446c-a262-5a383b1d7e52" containerID="b4880425775fd70bc813079913bbf8f5c4f8f371571355c4da87c44a571b62e6" exitCode=0 Feb 03 10:49:42 crc kubenswrapper[5010]: I0203 10:49:42.863972 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" event={"ID":"7353ead1-b7ae-446c-a262-5a383b1d7e52","Type":"ContainerDied","Data":"b4880425775fd70bc813079913bbf8f5c4f8f371571355c4da87c44a571b62e6"} Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.306854 5010 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.407532 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-0\") pod \"7353ead1-b7ae-446c-a262-5a383b1d7e52\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") "
Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.407874 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-telemetry-combined-ca-bundle\") pod \"7353ead1-b7ae-446c-a262-5a383b1d7e52\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") "
Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.407965 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-727r4\" (UniqueName: \"kubernetes.io/projected/7353ead1-b7ae-446c-a262-5a383b1d7e52-kube-api-access-727r4\") pod \"7353ead1-b7ae-446c-a262-5a383b1d7e52\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") "
Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.408059 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-inventory\") pod \"7353ead1-b7ae-446c-a262-5a383b1d7e52\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") "
Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.408119 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-1\") pod \"7353ead1-b7ae-446c-a262-5a383b1d7e52\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") "
Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.408350 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-2\") pod \"7353ead1-b7ae-446c-a262-5a383b1d7e52\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") "
Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.408390 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ssh-key-openstack-edpm-ipam\") pod \"7353ead1-b7ae-446c-a262-5a383b1d7e52\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") "
Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.414713 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7353ead1-b7ae-446c-a262-5a383b1d7e52" (UID: "7353ead1-b7ae-446c-a262-5a383b1d7e52"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.415525 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7353ead1-b7ae-446c-a262-5a383b1d7e52-kube-api-access-727r4" (OuterVolumeSpecName: "kube-api-access-727r4") pod "7353ead1-b7ae-446c-a262-5a383b1d7e52" (UID: "7353ead1-b7ae-446c-a262-5a383b1d7e52"). InnerVolumeSpecName "kube-api-access-727r4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.446996 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7353ead1-b7ae-446c-a262-5a383b1d7e52" (UID: "7353ead1-b7ae-446c-a262-5a383b1d7e52"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.449705 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "7353ead1-b7ae-446c-a262-5a383b1d7e52" (UID: "7353ead1-b7ae-446c-a262-5a383b1d7e52"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.452404 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "7353ead1-b7ae-446c-a262-5a383b1d7e52" (UID: "7353ead1-b7ae-446c-a262-5a383b1d7e52"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.462194 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-inventory" (OuterVolumeSpecName: "inventory") pod "7353ead1-b7ae-446c-a262-5a383b1d7e52" (UID: "7353ead1-b7ae-446c-a262-5a383b1d7e52"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.478880 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "7353ead1-b7ae-446c-a262-5a383b1d7e52" (UID: "7353ead1-b7ae-446c-a262-5a383b1d7e52"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.511194 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.511247 5010 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.511264 5010 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.511276 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.511289 5010 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.511299 5010 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.511316 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-727r4\" (UniqueName: \"kubernetes.io/projected/7353ead1-b7ae-446c-a262-5a383b1d7e52-kube-api-access-727r4\") on node \"crc\" DevicePath \"\"" Feb 03 10:49:45 crc kubenswrapper[5010]: I0203 10:49:45.016886 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" event={"ID":"7353ead1-b7ae-446c-a262-5a383b1d7e52","Type":"ContainerDied","Data":"91d0640bf20723aa34494df221748d24f3bd4a04ce7159801cea99aea978bc5e"} Feb 03 10:49:45 crc kubenswrapper[5010]: I0203 10:49:45.016945 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d0640bf20723aa34494df221748d24f3bd4a04ce7159801cea99aea978bc5e" Feb 03 10:49:45 crc kubenswrapper[5010]: I0203 10:49:45.017038 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.516155 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qddhr"] Feb 03 10:50:06 crc kubenswrapper[5010]: E0203 10:50:06.517169 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7353ead1-b7ae-446c-a262-5a383b1d7e52" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.517188 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="7353ead1-b7ae-446c-a262-5a383b1d7e52" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.517869 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="7353ead1-b7ae-446c-a262-5a383b1d7e52" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.519673 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.520240 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qddhr"] Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.625926 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-catalog-content\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.626071 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcw2d\" (UniqueName: \"kubernetes.io/projected/050580f3-ed5d-45ed-9fd8-f1c04801481e-kube-api-access-vcw2d\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.626120 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-utilities\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.728481 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-catalog-content\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.728556 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcw2d\" (UniqueName: \"kubernetes.io/projected/050580f3-ed5d-45ed-9fd8-f1c04801481e-kube-api-access-vcw2d\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.728583 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-utilities\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.729180 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-catalog-content\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.729350 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-utilities\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.751068 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcw2d\" (UniqueName: \"kubernetes.io/projected/050580f3-ed5d-45ed-9fd8-f1c04801481e-kube-api-access-vcw2d\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.847363 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:07 crc kubenswrapper[5010]: I0203 10:50:07.377429 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qddhr"] Feb 03 10:50:07 crc kubenswrapper[5010]: W0203 10:50:07.381538 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod050580f3_ed5d_45ed_9fd8_f1c04801481e.slice/crio-9173026a9319e270a4f767d7eb35ba42d9773d7168b4d9cb6580511e85f53807 WatchSource:0}: Error finding container 9173026a9319e270a4f767d7eb35ba42d9773d7168b4d9cb6580511e85f53807: Status 404 returned error can't find the container with id 9173026a9319e270a4f767d7eb35ba42d9773d7168b4d9cb6580511e85f53807 Feb 03 10:50:08 crc kubenswrapper[5010]: I0203 10:50:08.250067 5010 generic.go:334] "Generic (PLEG): container finished" podID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerID="f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4" exitCode=0 Feb 03 10:50:08 crc kubenswrapper[5010]: I0203 10:50:08.250130 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qddhr" event={"ID":"050580f3-ed5d-45ed-9fd8-f1c04801481e","Type":"ContainerDied","Data":"f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4"} Feb 03 10:50:08 crc kubenswrapper[5010]: I0203 10:50:08.250497 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qddhr" event={"ID":"050580f3-ed5d-45ed-9fd8-f1c04801481e","Type":"ContainerStarted","Data":"9173026a9319e270a4f767d7eb35ba42d9773d7168b4d9cb6580511e85f53807"} Feb 03 10:50:10 crc kubenswrapper[5010]: I0203 10:50:10.276503 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qddhr" event={"ID":"050580f3-ed5d-45ed-9fd8-f1c04801481e","Type":"ContainerStarted","Data":"8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0"} Feb 03 10:50:12 crc kubenswrapper[5010]: I0203 10:50:12.303915 5010 
generic.go:334] "Generic (PLEG): container finished" podID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerID="8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0" exitCode=0 Feb 03 10:50:12 crc kubenswrapper[5010]: I0203 10:50:12.304035 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qddhr" event={"ID":"050580f3-ed5d-45ed-9fd8-f1c04801481e","Type":"ContainerDied","Data":"8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0"} Feb 03 10:50:13 crc kubenswrapper[5010]: I0203 10:50:13.318489 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qddhr" event={"ID":"050580f3-ed5d-45ed-9fd8-f1c04801481e","Type":"ContainerStarted","Data":"85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea"} Feb 03 10:50:13 crc kubenswrapper[5010]: I0203 10:50:13.351872 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qddhr" podStartSLOduration=2.502901355 podStartE2EDuration="7.35184595s" podCreationTimestamp="2026-02-03 10:50:06 +0000 UTC" firstStartedPulling="2026-02-03 10:50:08.254910038 +0000 UTC m=+2878.410886167" lastFinishedPulling="2026-02-03 10:50:13.103854633 +0000 UTC m=+2883.259830762" observedRunningTime="2026-02-03 10:50:13.346924684 +0000 UTC m=+2883.502900813" watchObservedRunningTime="2026-02-03 10:50:13.35184595 +0000 UTC m=+2883.507822079" Feb 03 10:50:16 crc kubenswrapper[5010]: I0203 10:50:16.390342 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:50:16 crc kubenswrapper[5010]: I0203 10:50:16.391155 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:50:16 crc kubenswrapper[5010]: I0203 10:50:16.847979 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:16 crc kubenswrapper[5010]: I0203 10:50:16.848401 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:17 crc kubenswrapper[5010]: I0203 10:50:17.908973 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qddhr" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="registry-server" probeResult="failure" output=< Feb 03 10:50:17 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s Feb 03 10:50:17 crc kubenswrapper[5010]: > Feb 03 10:50:27 crc kubenswrapper[5010]: I0203 10:50:27.899529 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qddhr" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="registry-server" probeResult="failure" output=< Feb 03 10:50:27 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s Feb 03 10:50:27 crc kubenswrapper[5010]: > Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.393319 5010 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/tempest-tests-tempest"] Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.395522 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.398380 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.400479 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.401132 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.402924 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-sbxfw" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.409763 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.449310 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.449370 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-config-data\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.449560 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.551856 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.551923 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.551965 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45sks\" (UniqueName: \"kubernetes.io/projected/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-kube-api-access-45sks\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.552006 5010 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.552068 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.552386 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.552589 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.552634 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-config-data\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.552731 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.553291 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.554255 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-config-data\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.560194 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.655695 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.656308 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.656365 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45sks\" (UniqueName: \"kubernetes.io/projected/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-kube-api-access-45sks\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.656429 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.656556 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.656706 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.657273 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.657863 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.658144 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.662320 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.666996 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.679851 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45sks\" (UniqueName: \"kubernetes.io/projected/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-kube-api-access-45sks\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.689000 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.716585 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 10:50:31 crc kubenswrapper[5010]: I0203 10:50:31.235903 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 03 10:50:31 crc kubenswrapper[5010]: I0203 10:50:31.569571 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"8c8d92ab-5652-4bd9-81af-fd0be7aea36f","Type":"ContainerStarted","Data":"08d3852b3365aa6563a9026a76a312565c0566fd0792c861c656faa1a56176fa"} Feb 03 10:50:36 crc kubenswrapper[5010]: I0203 10:50:36.905940 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:36 crc kubenswrapper[5010]: I0203 10:50:36.962930 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:37 crc kubenswrapper[5010]: I0203 10:50:37.711845 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qddhr"] Feb 03 10:50:38 crc kubenswrapper[5010]: I0203 10:50:38.704181 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qddhr" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="registry-server" containerID="cri-o://85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea" gracePeriod=2 Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.273638 5010 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.379394 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcw2d\" (UniqueName: \"kubernetes.io/projected/050580f3-ed5d-45ed-9fd8-f1c04801481e-kube-api-access-vcw2d\") pod \"050580f3-ed5d-45ed-9fd8-f1c04801481e\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") "
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.379768 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-utilities\") pod \"050580f3-ed5d-45ed-9fd8-f1c04801481e\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") "
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.379818 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-catalog-content\") pod \"050580f3-ed5d-45ed-9fd8-f1c04801481e\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") "
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.380658 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-utilities" (OuterVolumeSpecName: "utilities") pod "050580f3-ed5d-45ed-9fd8-f1c04801481e" (UID: "050580f3-ed5d-45ed-9fd8-f1c04801481e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.389714 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050580f3-ed5d-45ed-9fd8-f1c04801481e-kube-api-access-vcw2d" (OuterVolumeSpecName: "kube-api-access-vcw2d") pod "050580f3-ed5d-45ed-9fd8-f1c04801481e" (UID: "050580f3-ed5d-45ed-9fd8-f1c04801481e"). InnerVolumeSpecName "kube-api-access-vcw2d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.486759 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.486807 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcw2d\" (UniqueName: \"kubernetes.io/projected/050580f3-ed5d-45ed-9fd8-f1c04801481e-kube-api-access-vcw2d\") on node \"crc\" DevicePath \"\""
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.509564 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "050580f3-ed5d-45ed-9fd8-f1c04801481e" (UID: "050580f3-ed5d-45ed-9fd8-f1c04801481e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.589006 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.720101 5010 generic.go:334] "Generic (PLEG): container finished" podID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerID="85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea" exitCode=0
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.720186 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qddhr" event={"ID":"050580f3-ed5d-45ed-9fd8-f1c04801481e","Type":"ContainerDied","Data":"85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea"}
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.720256 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qddhr"
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.720284 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qddhr" event={"ID":"050580f3-ed5d-45ed-9fd8-f1c04801481e","Type":"ContainerDied","Data":"9173026a9319e270a4f767d7eb35ba42d9773d7168b4d9cb6580511e85f53807"}
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.720331 5010 scope.go:117] "RemoveContainer" containerID="85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea"
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.749918 5010 scope.go:117] "RemoveContainer" containerID="8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0"
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.776252 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qddhr"]
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.787371 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qddhr"]
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.803996 5010 scope.go:117] "RemoveContainer" containerID="f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4"
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.846182 5010 scope.go:117] "RemoveContainer" containerID="85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea"
Feb 03 10:50:39 crc kubenswrapper[5010]: E0203 10:50:39.847302 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea\": container with ID starting with 85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea not found: ID does not exist" containerID="85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea"
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.847367 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea"} err="failed to get container status \"85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea\": rpc error: code = NotFound desc = could not find container \"85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea\": container with ID starting with 85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea not found: ID does not exist"
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.847411 5010 scope.go:117] "RemoveContainer" containerID="8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0"
Feb 03 10:50:39 crc kubenswrapper[5010]: E0203 10:50:39.847834 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0\": container with ID starting with 8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0 not found: ID does not exist" containerID="8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0"
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.847859 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0"} err="failed to get container status \"8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0\": rpc error: code = NotFound desc = could not find container \"8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0\": container with ID starting with 8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0 not found: ID does not exist"
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.847874 5010 scope.go:117] "RemoveContainer" containerID="f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4"
Feb 03 10:50:39 crc kubenswrapper[5010]: E0203 10:50:39.848375 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4\": container with ID starting with f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4 not found: ID does not exist" containerID="f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4"
Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.848415 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4"} err="failed to get container status \"f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4\": rpc error: code = NotFound desc = could not find container \"f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4\": container with ID starting with f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4 not found: ID does not exist"
Feb 03 10:50:40 crc kubenswrapper[5010]: I0203 10:50:40.530639 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" path="/var/lib/kubelet/pods/050580f3-ed5d-45ed-9fd8-f1c04801481e/volumes"
Feb 03 10:50:46 crc kubenswrapper[5010]: I0203 10:50:46.390467 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:50:46 crc kubenswrapper[5010]: I0203 10:50:46.391184 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 10:51:06 crc kubenswrapper[5010]: E0203 10:51:06.449847 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 03 10:51:06 crc kubenswrapper[5010]: E0203 10:51:06.452800 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45sks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(8c8d92ab-5652-4bd9-81af-fd0be7aea36f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:51:06 crc kubenswrapper[5010]: E0203 10:51:06.454130 5010 
Feb 03 10:51:07 crc kubenswrapper[5010]: E0203 10:51:07.044124 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="8c8d92ab-5652-4bd9-81af-fd0be7aea36f"
Feb 03 10:51:16 crc kubenswrapper[5010]: I0203 10:51:16.390508 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:51:16 crc kubenswrapper[5010]: I0203 10:51:16.392893 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 10:51:16 crc kubenswrapper[5010]: I0203 10:51:16.393078 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz"
Feb 03 10:51:16 crc kubenswrapper[5010]: I0203 10:51:16.394166 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 10:51:16 crc kubenswrapper[5010]: I0203 10:51:16.394352 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" gracePeriod=600
Feb 03 10:51:16 crc kubenswrapper[5010]: E0203 10:51:16.534902 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:51:17 crc kubenswrapper[5010]: I0203 10:51:17.153176 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" exitCode=0
Feb 03 10:51:17 crc kubenswrapper[5010]: I0203 10:51:17.153260 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"}
Feb 03 10:51:17 crc kubenswrapper[5010]: I0203 10:51:17.153367 5010 scope.go:117] "RemoveContainer" containerID="b61671ae7473626ed1f7e8bbc62ee5800e0d1f9237e36316dd37140b902ac261"
Feb 03 10:51:17 crc kubenswrapper[5010]: I0203 10:51:17.154583 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"
Feb 03 10:51:17 crc kubenswrapper[5010]: E0203 10:51:17.155084 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:51:18 crc kubenswrapper[5010]: I0203 10:51:18.967156 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Feb 03 10:51:20 crc kubenswrapper[5010]: I0203 10:51:20.191338 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"8c8d92ab-5652-4bd9-81af-fd0be7aea36f","Type":"ContainerStarted","Data":"1dceb12710efc42bf7d1bc8254652d746deec954467b49662ae6e52ac9ca2747"}
Feb 03 10:51:20 crc kubenswrapper[5010]: I0203 10:51:20.221023 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.503850372 podStartE2EDuration="51.22099766s" podCreationTimestamp="2026-02-03 10:50:29 +0000 UTC" firstStartedPulling="2026-02-03 10:50:31.245966243 +0000 UTC m=+2901.401942372" lastFinishedPulling="2026-02-03 10:51:18.963113541 +0000 UTC m=+2949.119089660" observedRunningTime="2026-02-03 10:51:20.213581351 +0000 UTC m=+2950.369557480" watchObservedRunningTime="2026-02-03 10:51:20.22099766 +0000 UTC m=+2950.376973789"
Feb 03 10:51:29 crc kubenswrapper[5010]: I0203 10:51:29.503632 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"
Feb 03 10:51:29 crc kubenswrapper[5010]: E0203 10:51:29.504706 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:51:43 crc kubenswrapper[5010]: I0203 10:51:43.503080 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"
Feb 03 10:51:43 crc kubenswrapper[5010]: E0203 10:51:43.504164 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:51:58 crc kubenswrapper[5010]: I0203 10:51:58.512450 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"
containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:51:58 crc kubenswrapper[5010]: E0203 10:51:58.515067 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:52:13 crc kubenswrapper[5010]: I0203 10:52:13.503036 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:52:13 crc kubenswrapper[5010]: E0203 10:52:13.504243 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:52:27 crc kubenswrapper[5010]: I0203 10:52:27.502755 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:52:27 crc kubenswrapper[5010]: E0203 10:52:27.503946 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:52:38 crc kubenswrapper[5010]: I0203 10:52:38.503857 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:52:38 crc kubenswrapper[5010]: E0203 10:52:38.505133 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:52:53 crc kubenswrapper[5010]: I0203 10:52:53.502785 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:52:53 crc kubenswrapper[5010]: E0203 10:52:53.503819 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:53:04 crc kubenswrapper[5010]: I0203 10:53:04.503502 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:53:04 crc kubenswrapper[5010]: E0203 10:53:04.504432 5010 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:53:16 crc kubenswrapper[5010]: I0203 10:53:16.503108 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:53:16 crc kubenswrapper[5010]: E0203 10:53:16.505695 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:53:28 crc kubenswrapper[5010]: I0203 10:53:28.503319 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:53:28 crc kubenswrapper[5010]: E0203 10:53:28.504304 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:53:40 crc kubenswrapper[5010]: I0203 10:53:40.511031 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:53:40 crc kubenswrapper[5010]: E0203 10:53:40.512052 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:53:53 crc kubenswrapper[5010]: I0203 10:53:53.502610 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:53:53 crc kubenswrapper[5010]: E0203 10:53:53.503408 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:54:08 crc kubenswrapper[5010]: I0203 10:54:08.503193 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:54:08 crc kubenswrapper[5010]: E0203 10:54:08.504576 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.474998 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j5vwx"]
Feb 03 10:54:17 crc kubenswrapper[5010]: E0203 10:54:17.476251 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="extract-utilities"
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.476272 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="extract-utilities"
Feb 03 10:54:17 crc kubenswrapper[5010]: E0203 10:54:17.479406 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="registry-server"
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.479454 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="registry-server"
Feb 03 10:54:17 crc kubenswrapper[5010]: E0203 10:54:17.479565 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="extract-content"
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.479572 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="extract-content"
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.480048 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="registry-server"
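[editor's note] The RemoveStaleState / "Deleted CPUSet assignment" lines above show the CPU and memory managers dropping per-container resource assignments left over from a deleted pod before admitting a new one. A toy sketch of that bookkeeping, assuming a simple in-memory map (the real checkpointed state behind state_mem.go is more involved):

package main

import "fmt"

type key struct {
	podUID, container string
}

func main() {
	// Assumed toy state: container -> assigned CPU set.
	assignments := map[key]string{
		{"050580f3-ed5d-45ed-9fd8-f1c04801481e", "extract-utilities"}: "0-1",
		{"050580f3-ed5d-45ed-9fd8-f1c04801481e", "registry-server"}:   "2-3",
	}
	// RemoveStaleState: the pod no longer exists, so every entry under
	// its UID is deleted before a new pod is admitted.
	stale := "050580f3-ed5d-45ed-9fd8-f1c04801481e"
	for k := range assignments {
		if k.podUID == stale {
			fmt.Printf("Deleted CPUSet assignment podUID=%q container=%q\n", k.podUID, k.container)
			delete(assignments, k)
		}
	}
}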
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.481953 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5vwx"
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.491054 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j5vwx"]
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.553539 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-utilities\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx"
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.554551 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-catalog-content\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx"
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.554603 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfx4x\" (UniqueName: \"kubernetes.io/projected/75610c94-1855-4f77-a701-8ef81b4d2e50-kube-api-access-gfx4x\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx"
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.657559 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-catalog-content\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx"
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.657672 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-catalog-content\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx"
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.657710 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfx4x\" (UniqueName: \"kubernetes.io/projected/75610c94-1855-4f77-a701-8ef81b4d2e50-kube-api-access-gfx4x\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx"
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.657782 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-utilities\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx"
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.658184 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-utilities\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx"
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.684755 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfx4x\" (UniqueName: \"kubernetes.io/projected/75610c94-1855-4f77-a701-8ef81b4d2e50-kube-api-access-gfx4x\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx"
Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.809155 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5vwx"
Feb 03 10:54:18 crc kubenswrapper[5010]: I0203 10:54:18.478442 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j5vwx"]
Feb 03 10:54:19 crc kubenswrapper[5010]: I0203 10:54:19.451514 5010 generic.go:334] "Generic (PLEG): container finished" podID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerID="5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8" exitCode=0
Feb 03 10:54:19 crc kubenswrapper[5010]: I0203 10:54:19.451661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vwx" event={"ID":"75610c94-1855-4f77-a701-8ef81b4d2e50","Type":"ContainerDied","Data":"5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8"}
Feb 03 10:54:19 crc kubenswrapper[5010]: I0203 10:54:19.451882 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vwx" event={"ID":"75610c94-1855-4f77-a701-8ef81b4d2e50","Type":"ContainerStarted","Data":"e5f41be29933987d2dba0464fb8639b118a5d77c9aa0f590b621c92b5c19e99e"}
Feb 03 10:54:19 crc kubenswrapper[5010]: I0203 10:54:19.454361 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 10:54:20 crc kubenswrapper[5010]: I0203 10:54:20.475747 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vwx" event={"ID":"75610c94-1855-4f77-a701-8ef81b4d2e50","Type":"ContainerStarted","Data":"e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0"}
Feb 03 10:54:20 crc kubenswrapper[5010]: E0203 10:54:20.876952 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75610c94_1855_4f77_a701_8ef81b4d2e50.slice/crio-conmon-e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0.scope\": RecentStats: unable to find data in memory cache]"
Feb 03 10:54:21 crc kubenswrapper[5010]: I0203 10:54:21.487305 5010 generic.go:334] "Generic (PLEG): container finished" podID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerID="e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0" exitCode=0
Feb 03 10:54:21 crc kubenswrapper[5010]: I0203 10:54:21.487380 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vwx" event={"ID":"75610c94-1855-4f77-a701-8ef81b4d2e50","Type":"ContainerDied","Data":"e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0"}
Feb 03 10:54:22 crc kubenswrapper[5010]: I0203 10:54:22.517040 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vwx" event={"ID":"75610c94-1855-4f77-a701-8ef81b4d2e50","Type":"ContainerStarted","Data":"301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905"}
pod="openshift-marketplace/community-operators-j5vwx" podStartSLOduration=3.057911914 podStartE2EDuration="5.534762362s" podCreationTimestamp="2026-02-03 10:54:17 +0000 UTC" firstStartedPulling="2026-02-03 10:54:19.453999524 +0000 UTC m=+3129.609975663" lastFinishedPulling="2026-02-03 10:54:21.930849982 +0000 UTC m=+3132.086826111" observedRunningTime="2026-02-03 10:54:22.531154309 +0000 UTC m=+3132.687130448" watchObservedRunningTime="2026-02-03 10:54:22.534762362 +0000 UTC m=+3132.690738501" Feb 03 10:54:23 crc kubenswrapper[5010]: I0203 10:54:23.502809 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:54:23 crc kubenswrapper[5010]: E0203 10:54:23.503424 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:54:27 crc kubenswrapper[5010]: I0203 10:54:27.809603 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:27 crc kubenswrapper[5010]: I0203 10:54:27.810303 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:27 crc kubenswrapper[5010]: I0203 10:54:27.863362 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:28 crc kubenswrapper[5010]: I0203 10:54:28.915671 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:28 crc kubenswrapper[5010]: I0203 10:54:28.976425 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j5vwx"] Feb 03 10:54:30 crc kubenswrapper[5010]: I0203 10:54:30.881075 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j5vwx" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerName="registry-server" containerID="cri-o://301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905" gracePeriod=2 Feb 03 10:54:31 crc kubenswrapper[5010]: E0203 10:54:31.152827 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75610c94_1855_4f77_a701_8ef81b4d2e50.slice/crio-conmon-301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905.scope\": RecentStats: unable to find data in memory cache]" Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.579761 5010 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.579761 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5vwx"
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.744795 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-catalog-content\") pod \"75610c94-1855-4f77-a701-8ef81b4d2e50\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") "
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.745012 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-utilities\") pod \"75610c94-1855-4f77-a701-8ef81b4d2e50\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") "
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.745109 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfx4x\" (UniqueName: \"kubernetes.io/projected/75610c94-1855-4f77-a701-8ef81b4d2e50-kube-api-access-gfx4x\") pod \"75610c94-1855-4f77-a701-8ef81b4d2e50\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") "
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.748571 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-utilities" (OuterVolumeSpecName: "utilities") pod "75610c94-1855-4f77-a701-8ef81b4d2e50" (UID: "75610c94-1855-4f77-a701-8ef81b4d2e50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.754343 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75610c94-1855-4f77-a701-8ef81b4d2e50-kube-api-access-gfx4x" (OuterVolumeSpecName: "kube-api-access-gfx4x") pod "75610c94-1855-4f77-a701-8ef81b4d2e50" (UID: "75610c94-1855-4f77-a701-8ef81b4d2e50"). InnerVolumeSpecName "kube-api-access-gfx4x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.808325 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75610c94-1855-4f77-a701-8ef81b4d2e50" (UID: "75610c94-1855-4f77-a701-8ef81b4d2e50"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.847935 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.847992 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfx4x\" (UniqueName: \"kubernetes.io/projected/75610c94-1855-4f77-a701-8ef81b4d2e50-kube-api-access-gfx4x\") on node \"crc\" DevicePath \"\""
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.848013 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.897683 5010 generic.go:334] "Generic (PLEG): container finished" podID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerID="301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905" exitCode=0
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.897753 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vwx" event={"ID":"75610c94-1855-4f77-a701-8ef81b4d2e50","Type":"ContainerDied","Data":"301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905"}
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.897798 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vwx" event={"ID":"75610c94-1855-4f77-a701-8ef81b4d2e50","Type":"ContainerDied","Data":"e5f41be29933987d2dba0464fb8639b118a5d77c9aa0f590b621c92b5c19e99e"}
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.897847 5010 scope.go:117] "RemoveContainer" containerID="301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905"
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.898083 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5vwx"
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.947199 5010 scope.go:117] "RemoveContainer" containerID="e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0"
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.957691 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j5vwx"]
Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.971410 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j5vwx"]
Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:31.999980 5010 scope.go:117] "RemoveContainer" containerID="5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8"
Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:32.046288 5010 scope.go:117] "RemoveContainer" containerID="301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905"
Feb 03 10:54:32 crc kubenswrapper[5010]: E0203 10:54:32.047479 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905\": container with ID starting with 301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905 not found: ID does not exist" containerID="301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905"
Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:32.047577 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905"} err="failed to get container status \"301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905\": rpc error: code = NotFound desc = could not find container \"301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905\": container with ID starting with 301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905 not found: ID does not exist"
Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:32.047666 5010 scope.go:117] "RemoveContainer" containerID="e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0"
Feb 03 10:54:32 crc kubenswrapper[5010]: E0203 10:54:32.048505 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0\": container with ID starting with e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0 not found: ID does not exist" containerID="e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0"
Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:32.048575 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0"} err="failed to get container status \"e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0\": rpc error: code = NotFound desc = could not find container \"e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0\": container with ID starting with e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0 not found: ID does not exist"
Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:32.048615 5010 scope.go:117] "RemoveContainer" containerID="5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8"
Feb 03 10:54:32 crc kubenswrapper[5010]: E0203 10:54:32.049054 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8\": container with ID starting with 5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8 not found: ID does not exist" containerID="5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8"
Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:32.049104 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8"} err="failed to get container status \"5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8\": rpc error: code = NotFound desc = could not find container \"5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8\": container with ID starting with 5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8 not found: ID does not exist"
Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:32.515081 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" path="/var/lib/kubelet/pods/75610c94-1855-4f77-a701-8ef81b4d2e50/volumes"
Feb 03 10:54:37 crc kubenswrapper[5010]: I0203 10:54:37.502894 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"
Feb 03 10:54:37 crc kubenswrapper[5010]: E0203 10:54:37.503781 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:54:50 crc kubenswrapper[5010]: I0203 10:54:50.508765 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"
Feb 03 10:54:50 crc kubenswrapper[5010]: E0203 10:54:50.510103 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:55:01 crc kubenswrapper[5010]: I0203 10:55:01.502635 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"
Feb 03 10:55:01 crc kubenswrapper[5010]: E0203 10:55:01.503573 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.696541 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9f8sv"]
containerName="registry-server" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.697947 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerName="registry-server" Feb 03 10:55:11 crc kubenswrapper[5010]: E0203 10:55:11.697976 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerName="extract-content" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.697982 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerName="extract-content" Feb 03 10:55:11 crc kubenswrapper[5010]: E0203 10:55:11.697997 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerName="extract-utilities" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.698005 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerName="extract-utilities" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.698206 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerName="registry-server" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.700478 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.711330 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f8sv"] Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.851590 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-catalog-content\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.851794 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kscxt\" (UniqueName: \"kubernetes.io/projected/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-kube-api-access-kscxt\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.852423 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-utilities\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.954768 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kscxt\" (UniqueName: \"kubernetes.io/projected/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-kube-api-access-kscxt\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.954954 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-utilities\") pod \"redhat-marketplace-9f8sv\" (UID: 
\"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.954981 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-catalog-content\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.955724 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-utilities\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.955815 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-catalog-content\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.980621 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kscxt\" (UniqueName: \"kubernetes.io/projected/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-kube-api-access-kscxt\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:12 crc kubenswrapper[5010]: I0203 10:55:12.036281 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:12 crc kubenswrapper[5010]: I0203 10:55:12.692969 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f8sv"] Feb 03 10:55:13 crc kubenswrapper[5010]: I0203 10:55:13.432186 5010 generic.go:334] "Generic (PLEG): container finished" podID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerID="f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9" exitCode=0 Feb 03 10:55:13 crc kubenswrapper[5010]: I0203 10:55:13.432299 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f8sv" event={"ID":"a90875cc-2fcf-425f-b55f-f48f0d9a71a8","Type":"ContainerDied","Data":"f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9"} Feb 03 10:55:13 crc kubenswrapper[5010]: I0203 10:55:13.432713 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f8sv" event={"ID":"a90875cc-2fcf-425f-b55f-f48f0d9a71a8","Type":"ContainerStarted","Data":"0aa8a178688868fad4a61fcd06e29546fa595b6c0d9f307f06ce2cf1da409bb6"} Feb 03 10:55:13 crc kubenswrapper[5010]: I0203 10:55:13.502925 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:55:13 crc kubenswrapper[5010]: E0203 10:55:13.503681 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:55:14 crc kubenswrapper[5010]: I0203 10:55:14.447742 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f8sv" event={"ID":"a90875cc-2fcf-425f-b55f-f48f0d9a71a8","Type":"ContainerStarted","Data":"953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461"} Feb 03 10:55:15 crc kubenswrapper[5010]: I0203 10:55:15.732381 5010 generic.go:334] "Generic (PLEG): container finished" podID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerID="953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461" exitCode=0 Feb 03 10:55:15 crc kubenswrapper[5010]: I0203 10:55:15.732903 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f8sv" event={"ID":"a90875cc-2fcf-425f-b55f-f48f0d9a71a8","Type":"ContainerDied","Data":"953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461"} Feb 03 10:55:16 crc kubenswrapper[5010]: I0203 10:55:16.747572 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f8sv" event={"ID":"a90875cc-2fcf-425f-b55f-f48f0d9a71a8","Type":"ContainerStarted","Data":"502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad"} Feb 03 10:55:16 crc kubenswrapper[5010]: I0203 10:55:16.769864 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9f8sv" podStartSLOduration=3.015479514 podStartE2EDuration="5.7698334s" podCreationTimestamp="2026-02-03 10:55:11 +0000 UTC" firstStartedPulling="2026-02-03 10:55:13.43530073 +0000 UTC m=+3183.591276859" lastFinishedPulling="2026-02-03 10:55:16.189654616 +0000 UTC m=+3186.345630745" observedRunningTime="2026-02-03 10:55:16.766841504 +0000 UTC m=+3186.922817633" watchObservedRunningTime="2026-02-03 10:55:16.7698334 +0000 UTC m=+3186.925809529" Feb 03 10:55:22 crc kubenswrapper[5010]: I0203 10:55:22.307692 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:22 crc kubenswrapper[5010]: I0203 10:55:22.326726 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:22 crc kubenswrapper[5010]: I0203 10:55:22.618978 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:22 crc kubenswrapper[5010]: I0203 10:55:22.942937 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:23 crc kubenswrapper[5010]: I0203 10:55:23.006807 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f8sv"] Feb 03 10:55:24 crc kubenswrapper[5010]: I0203 10:55:24.908278 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9f8sv" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="registry-server" containerID="cri-o://502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad" gracePeriod=2 Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.476194 5010 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.476194 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f8sv"
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.491759 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kscxt\" (UniqueName: \"kubernetes.io/projected/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-kube-api-access-kscxt\") pod \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") "
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.491848 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-catalog-content\") pod \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") "
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.491885 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-utilities\") pod \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") "
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.494075 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-utilities" (OuterVolumeSpecName: "utilities") pod "a90875cc-2fcf-425f-b55f-f48f0d9a71a8" (UID: "a90875cc-2fcf-425f-b55f-f48f0d9a71a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.507894 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"
Feb 03 10:55:25 crc kubenswrapper[5010]: E0203 10:55:25.508178 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.515692 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-kube-api-access-kscxt" (OuterVolumeSpecName: "kube-api-access-kscxt") pod "a90875cc-2fcf-425f-b55f-f48f0d9a71a8" (UID: "a90875cc-2fcf-425f-b55f-f48f0d9a71a8"). InnerVolumeSpecName "kube-api-access-kscxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.533751 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a90875cc-2fcf-425f-b55f-f48f0d9a71a8" (UID: "a90875cc-2fcf-425f-b55f-f48f0d9a71a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.597985 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kscxt\" (UniqueName: \"kubernetes.io/projected/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-kube-api-access-kscxt\") on node \"crc\" DevicePath \"\""
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.598050 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.598065 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.922291 5010 generic.go:334] "Generic (PLEG): container finished" podID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerID="502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad" exitCode=0
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.922361 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f8sv" event={"ID":"a90875cc-2fcf-425f-b55f-f48f0d9a71a8","Type":"ContainerDied","Data":"502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad"}
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.922405 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f8sv" event={"ID":"a90875cc-2fcf-425f-b55f-f48f0d9a71a8","Type":"ContainerDied","Data":"0aa8a178688868fad4a61fcd06e29546fa595b6c0d9f307f06ce2cf1da409bb6"}
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.922430 5010 scope.go:117] "RemoveContainer" containerID="502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad"
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.922600 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f8sv"
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.974807 5010 scope.go:117] "RemoveContainer" containerID="953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461"
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.981018 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f8sv"]
Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.994842 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f8sv"]
Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.023863 5010 scope.go:117] "RemoveContainer" containerID="f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9"
Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.062964 5010 scope.go:117] "RemoveContainer" containerID="502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad"
Feb 03 10:55:26 crc kubenswrapper[5010]: E0203 10:55:26.063689 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad\": container with ID starting with 502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad not found: ID does not exist" containerID="502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad"
Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.063753 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad"} err="failed to get container status \"502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad\": rpc error: code = NotFound desc = could not find container \"502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad\": container with ID starting with 502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad not found: ID does not exist"
Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.063789 5010 scope.go:117] "RemoveContainer" containerID="953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461"
Feb 03 10:55:26 crc kubenswrapper[5010]: E0203 10:55:26.064625 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461\": container with ID starting with 953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461 not found: ID does not exist" containerID="953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461"
Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.064667 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461"} err="failed to get container status \"953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461\": rpc error: code = NotFound desc = could not find container \"953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461\": container with ID starting with 953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461 not found: ID does not exist"
Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.064713 5010 scope.go:117] "RemoveContainer" containerID="f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9"
Feb 03 10:55:26 crc kubenswrapper[5010]: E0203 10:55:26.065915 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9\": container with ID starting with f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9 not found: ID does not exist" containerID="f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9"
Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.065960 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9"} err="failed to get container status \"f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9\": rpc error: code = NotFound desc = could not find container \"f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9\": container with ID starting with f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9 not found: ID does not exist"
Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.518851 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" path="/var/lib/kubelet/pods/a90875cc-2fcf-425f-b55f-f48f0d9a71a8/volumes"
Feb 03 10:55:40 crc kubenswrapper[5010]: I0203 10:55:40.510319 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"
Feb 03 10:55:40 crc kubenswrapper[5010]: E0203 10:55:40.511305 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:55:54 crc kubenswrapper[5010]: I0203 10:55:54.502832 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"
Feb 03 10:55:54 crc kubenswrapper[5010]: E0203 10:55:54.503874 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:56:07 crc kubenswrapper[5010]: I0203 10:56:07.503004 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"
Feb 03 10:56:07 crc kubenswrapper[5010]: E0203 10:56:07.504286 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:56:22 crc kubenswrapper[5010]: I0203 10:56:22.503329 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"
pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"954ea60c6e1c907175e18b080d65b7e14b322101b2585bb6251035ace6752460"} Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.451639 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6xxhp"] Feb 03 10:57:02 crc kubenswrapper[5010]: E0203 10:57:02.453010 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="extract-utilities" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.453036 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="extract-utilities" Feb 03 10:57:02 crc kubenswrapper[5010]: E0203 10:57:02.453097 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="extract-content" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.453108 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="extract-content" Feb 03 10:57:02 crc kubenswrapper[5010]: E0203 10:57:02.453126 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="registry-server" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.453135 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="registry-server" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.453456 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="registry-server" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.455739 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.471557 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xxhp"] Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.553609 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-utilities\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.554015 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-catalog-content\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.554353 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6zrc\" (UniqueName: \"kubernetes.io/projected/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-kube-api-access-c6zrc\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.656732 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-utilities\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.656816 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-catalog-content\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.656989 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6zrc\" (UniqueName: \"kubernetes.io/projected/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-kube-api-access-c6zrc\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.657765 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-utilities\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.658088 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-catalog-content\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.703266 5010 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c6zrc\" (UniqueName: \"kubernetes.io/projected/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-kube-api-access-c6zrc\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.783038 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:03 crc kubenswrapper[5010]: I0203 10:57:03.401320 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xxhp"] Feb 03 10:57:04 crc kubenswrapper[5010]: I0203 10:57:04.069095 5010 generic.go:334] "Generic (PLEG): container finished" podID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" containerID="258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68" exitCode=0 Feb 03 10:57:04 crc kubenswrapper[5010]: I0203 10:57:04.069159 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxhp" event={"ID":"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8","Type":"ContainerDied","Data":"258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68"} Feb 03 10:57:04 crc kubenswrapper[5010]: I0203 10:57:04.069204 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxhp" event={"ID":"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8","Type":"ContainerStarted","Data":"a23ecdd349e9698971d6cc0130e941bb3dc5225a38df40061445a094d26767a2"} Feb 03 10:57:06 crc kubenswrapper[5010]: I0203 10:57:06.097189 5010 generic.go:334] "Generic (PLEG): container finished" podID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" containerID="ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18" exitCode=0 Feb 03 10:57:06 crc kubenswrapper[5010]: I0203 10:57:06.097268 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxhp" event={"ID":"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8","Type":"ContainerDied","Data":"ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18"} Feb 03 10:57:07 crc kubenswrapper[5010]: I0203 10:57:07.112085 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxhp" event={"ID":"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8","Type":"ContainerStarted","Data":"e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149"} Feb 03 10:57:07 crc kubenswrapper[5010]: I0203 10:57:07.139169 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6xxhp" podStartSLOduration=2.658385303 podStartE2EDuration="5.13914006s" podCreationTimestamp="2026-02-03 10:57:02 +0000 UTC" firstStartedPulling="2026-02-03 10:57:04.073783557 +0000 UTC m=+3294.229759686" lastFinishedPulling="2026-02-03 10:57:06.554538314 +0000 UTC m=+3296.710514443" observedRunningTime="2026-02-03 10:57:07.134120212 +0000 UTC m=+3297.290096341" watchObservedRunningTime="2026-02-03 10:57:07.13914006 +0000 UTC m=+3297.295116189" Feb 03 10:57:12 crc kubenswrapper[5010]: I0203 10:57:12.784028 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:12 crc kubenswrapper[5010]: I0203 10:57:12.784955 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:12 crc kubenswrapper[5010]: I0203 10:57:12.840252 5010 
Feb 03 10:57:12 crc kubenswrapper[5010]: I0203 10:57:12.784028 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6xxhp"
Feb 03 10:57:12 crc kubenswrapper[5010]: I0203 10:57:12.784955 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6xxhp"
Feb 03 10:57:12 crc kubenswrapper[5010]: I0203 10:57:12.840252 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6xxhp"
Feb 03 10:57:13 crc kubenswrapper[5010]: I0203 10:57:13.219970 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6xxhp"
Feb 03 10:57:13 crc kubenswrapper[5010]: I0203 10:57:13.281874 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xxhp"]
Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.199937 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6xxhp" podUID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" containerName="registry-server" containerID="cri-o://e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149" gracePeriod=2
Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.713910 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xxhp"
Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.804856 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-catalog-content\") pod \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") "
Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.804918 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6zrc\" (UniqueName: \"kubernetes.io/projected/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-kube-api-access-c6zrc\") pod \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") "
Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.804984 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-utilities\") pod \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") "
Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.807946 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-utilities" (OuterVolumeSpecName: "utilities") pod "c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" (UID: "c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.814326 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-kube-api-access-c6zrc" (OuterVolumeSpecName: "kube-api-access-c6zrc") pod "c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" (UID: "c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8"). InnerVolumeSpecName "kube-api-access-c6zrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.867901 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" (UID: "c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.907322 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.907710 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6zrc\" (UniqueName: \"kubernetes.io/projected/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-kube-api-access-c6zrc\") on node \"crc\" DevicePath \"\""
Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.907791 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.221182 5010 generic.go:334] "Generic (PLEG): container finished" podID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" containerID="e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149" exitCode=0
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.221306 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xxhp"
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.222494 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxhp" event={"ID":"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8","Type":"ContainerDied","Data":"e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149"}
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.222621 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxhp" event={"ID":"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8","Type":"ContainerDied","Data":"a23ecdd349e9698971d6cc0130e941bb3dc5225a38df40061445a094d26767a2"}
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.222724 5010 scope.go:117] "RemoveContainer" containerID="e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149"
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.300417 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xxhp"]
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.329359 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6xxhp"]
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.355547 5010 scope.go:117] "RemoveContainer" containerID="ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18"
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.399238 5010 scope.go:117] "RemoveContainer" containerID="258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68"
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.458375 5010 scope.go:117] "RemoveContainer" containerID="e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149"
Feb 03 10:57:16 crc kubenswrapper[5010]: E0203 10:57:16.459898 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149\": container with ID starting with e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149 not found: ID does not exist" containerID="e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149"
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.459959 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149"} err="failed to get container status \"e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149\": rpc error: code = NotFound desc = could not find container \"e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149\": container with ID starting with e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149 not found: ID does not exist"
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.459996 5010 scope.go:117] "RemoveContainer" containerID="ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18"
Feb 03 10:57:16 crc kubenswrapper[5010]: E0203 10:57:16.460612 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18\": container with ID starting with ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18 not found: ID does not exist" containerID="ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18"
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.460665 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18"} err="failed to get container status \"ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18\": rpc error: code = NotFound desc = could not find container \"ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18\": container with ID starting with ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18 not found: ID does not exist"
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.460706 5010 scope.go:117] "RemoveContainer" containerID="258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68"
Feb 03 10:57:16 crc kubenswrapper[5010]: E0203 10:57:16.462476 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68\": container with ID starting with 258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68 not found: ID does not exist" containerID="258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68"
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.462545 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68"} err="failed to get container status \"258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68\": rpc error: code = NotFound desc = could not find container \"258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68\": container with ID starting with 258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68 not found: ID does not exist"
Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.515420 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" path="/var/lib/kubelet/pods/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8/volumes"
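The RemoveContainer / "DeleteContainer returned error" exchanges above are benign: the kubelet re-queries status for container IDs the runtime has already removed, and CRI-O answers NotFound. The usual client-side pattern is to treat NotFound as success on a delete; a sketch under that assumption (removeContainer here is a hypothetical stand-in for the CRI call, while the status/codes helpers are real grpc-go API):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer is a hypothetical stand-in for a CRI RemoveContainer
    // call that fails the way the runtime does in the log above.
    func removeContainer(ctx context.Context, id string) error {
        return status.Error(codes.NotFound, "could not find container "+id)
    }

    // deleteIfPresent treats NotFound as success: the container is already
    // gone, so the cleanup has nothing left to do.
    func deleteIfPresent(ctx context.Context, id string) error {
        if err := removeContainer(ctx, id); err != nil {
            if status.Code(err) == codes.NotFound {
                return nil // already deleted
            }
            return err
        }
        return nil
    }

    func main() {
        fmt.Println(deleteIfPresent(context.Background(), "e7c269ac15b3"))
    }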
Feb 03 10:58:46 crc kubenswrapper[5010]: I0203 10:58:46.390736 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:58:46 crc kubenswrapper[5010]: I0203 10:58:46.391627 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 10:59:16 crc kubenswrapper[5010]: I0203 10:59:16.390354 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:59:16 crc kubenswrapper[5010]: I0203 10:59:16.390813 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.392283 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.393244 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.393313 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz"
Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.394689 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"954ea60c6e1c907175e18b080d65b7e14b322101b2585bb6251035ace6752460"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.394835 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://954ea60c6e1c907175e18b080d65b7e14b322101b2585bb6251035ace6752460" gracePeriod=600
Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.865532 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="954ea60c6e1c907175e18b080d65b7e14b322101b2585bb6251035ace6752460" exitCode=0
Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.866063 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"954ea60c6e1c907175e18b080d65b7e14b322101b2585bb6251035ace6752460"}
Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.866179 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"
Feb 03 10:59:47 crc kubenswrapper[5010]: I0203 10:59:47.883290 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"}
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.162397 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.173577 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.177121 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2"] Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.256540 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxps6\" (UniqueName: \"kubernetes.io/projected/b32288df-fb1b-4b63-b699-4eabdb2a0cea-kube-api-access-mxps6\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.256622 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b32288df-fb1b-4b63-b699-4eabdb2a0cea-secret-volume\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.256731 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b32288df-fb1b-4b63-b699-4eabdb2a0cea-config-volume\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.358519 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b32288df-fb1b-4b63-b699-4eabdb2a0cea-secret-volume\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.358657 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b32288df-fb1b-4b63-b699-4eabdb2a0cea-config-volume\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.358808 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxps6\" (UniqueName: \"kubernetes.io/projected/b32288df-fb1b-4b63-b699-4eabdb2a0cea-kube-api-access-mxps6\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.360749 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b32288df-fb1b-4b63-b699-4eabdb2a0cea-config-volume\") pod 
\"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.368100 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b32288df-fb1b-4b63-b699-4eabdb2a0cea-secret-volume\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.385732 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxps6\" (UniqueName: \"kubernetes.io/projected/b32288df-fb1b-4b63-b699-4eabdb2a0cea-kube-api-access-mxps6\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.489305 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:01 crc kubenswrapper[5010]: I0203 11:00:01.026028 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2"] Feb 03 11:00:02 crc kubenswrapper[5010]: I0203 11:00:02.108111 5010 generic.go:334] "Generic (PLEG): container finished" podID="b32288df-fb1b-4b63-b699-4eabdb2a0cea" containerID="33926290be86ca315743ea2dbeb58bb25d2755270bd9efcd12f13f2ea74329cd" exitCode=0 Feb 03 11:00:02 crc kubenswrapper[5010]: I0203 11:00:02.108972 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" event={"ID":"b32288df-fb1b-4b63-b699-4eabdb2a0cea","Type":"ContainerDied","Data":"33926290be86ca315743ea2dbeb58bb25d2755270bd9efcd12f13f2ea74329cd"} Feb 03 11:00:02 crc kubenswrapper[5010]: I0203 11:00:02.109010 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" event={"ID":"b32288df-fb1b-4b63-b699-4eabdb2a0cea","Type":"ContainerStarted","Data":"54d395f87e6be00632a11eb7daac0e9668f4044743e9f913e46a8cde154d6a6c"} Feb 03 11:00:03 crc kubenswrapper[5010]: I0203 11:00:03.998873 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.134953 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" event={"ID":"b32288df-fb1b-4b63-b699-4eabdb2a0cea","Type":"ContainerDied","Data":"54d395f87e6be00632a11eb7daac0e9668f4044743e9f913e46a8cde154d6a6c"} Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.135019 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54d395f87e6be00632a11eb7daac0e9668f4044743e9f913e46a8cde154d6a6c" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.135068 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.187537 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b32288df-fb1b-4b63-b699-4eabdb2a0cea-config-volume\") pod \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.188247 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxps6\" (UniqueName: \"kubernetes.io/projected/b32288df-fb1b-4b63-b699-4eabdb2a0cea-kube-api-access-mxps6\") pod \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.188425 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b32288df-fb1b-4b63-b699-4eabdb2a0cea-secret-volume\") pod \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.188631 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b32288df-fb1b-4b63-b699-4eabdb2a0cea-config-volume" (OuterVolumeSpecName: "config-volume") pod "b32288df-fb1b-4b63-b699-4eabdb2a0cea" (UID: "b32288df-fb1b-4b63-b699-4eabdb2a0cea"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.189316 5010 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b32288df-fb1b-4b63-b699-4eabdb2a0cea-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.196985 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b32288df-fb1b-4b63-b699-4eabdb2a0cea-kube-api-access-mxps6" (OuterVolumeSpecName: "kube-api-access-mxps6") pod "b32288df-fb1b-4b63-b699-4eabdb2a0cea" (UID: "b32288df-fb1b-4b63-b699-4eabdb2a0cea"). InnerVolumeSpecName "kube-api-access-mxps6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.199660 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b32288df-fb1b-4b63-b699-4eabdb2a0cea-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b32288df-fb1b-4b63-b699-4eabdb2a0cea" (UID: "b32288df-fb1b-4b63-b699-4eabdb2a0cea"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.291342 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxps6\" (UniqueName: \"kubernetes.io/projected/b32288df-fb1b-4b63-b699-4eabdb2a0cea-kube-api-access-mxps6\") on node \"crc\" DevicePath \"\"" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.291839 5010 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b32288df-fb1b-4b63-b699-4eabdb2a0cea-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 11:00:05 crc kubenswrapper[5010]: I0203 11:00:05.098924 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz"] Feb 03 11:00:05 crc kubenswrapper[5010]: I0203 11:00:05.133902 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz"] Feb 03 11:00:06 crc kubenswrapper[5010]: I0203 11:00:06.519352 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eae17d2-2362-4e78-908b-42fcb386ec60" path="/var/lib/kubelet/pods/0eae17d2-2362-4e78-908b-42fcb386ec60/volumes" Feb 03 11:00:23 crc kubenswrapper[5010]: I0203 11:00:23.699020 5010 scope.go:117] "RemoveContainer" containerID="73db75a439822b6dd55d522e4da89fbd20aa66ab67d412f72f9dfe07016f6245" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.329096 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pn7mc"] Feb 03 11:00:40 crc kubenswrapper[5010]: E0203 11:00:40.330617 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b32288df-fb1b-4b63-b699-4eabdb2a0cea" containerName="collect-profiles" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.330636 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="b32288df-fb1b-4b63-b699-4eabdb2a0cea" containerName="collect-profiles" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.330858 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="b32288df-fb1b-4b63-b699-4eabdb2a0cea" containerName="collect-profiles" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.332817 5010 util.go:30] "No sandbox for pod can be found. 
Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.329096 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pn7mc"]
Feb 03 11:00:40 crc kubenswrapper[5010]: E0203 11:00:40.330617 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b32288df-fb1b-4b63-b699-4eabdb2a0cea" containerName="collect-profiles"
Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.330636 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="b32288df-fb1b-4b63-b699-4eabdb2a0cea" containerName="collect-profiles"
Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.330858 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="b32288df-fb1b-4b63-b699-4eabdb2a0cea" containerName="collect-profiles"
Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.332817 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pn7mc"
Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.352343 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pn7mc"]
Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.453476 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kmc6\" (UniqueName: \"kubernetes.io/projected/3b136e4b-d6df-4608-8e99-4d63efe1d513-kube-api-access-6kmc6\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc"
Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.453749 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-catalog-content\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc"
Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.453929 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-utilities\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc"
Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.557155 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kmc6\" (UniqueName: \"kubernetes.io/projected/3b136e4b-d6df-4608-8e99-4d63efe1d513-kube-api-access-6kmc6\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc"
Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.557618 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-catalog-content\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc"
Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.557823 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-utilities\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc"
Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.558277 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-catalog-content\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc"
Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.558387 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-utilities\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc"
Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.590028 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kmc6\" (UniqueName: \"kubernetes.io/projected/3b136e4b-d6df-4608-8e99-4d63efe1d513-kube-api-access-6kmc6\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc"
Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.655771 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pn7mc"
Feb 03 11:00:41 crc kubenswrapper[5010]: I0203 11:00:41.165065 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pn7mc"]
Feb 03 11:00:41 crc kubenswrapper[5010]: I0203 11:00:41.529977 5010 generic.go:334] "Generic (PLEG): container finished" podID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerID="5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a" exitCode=0
Feb 03 11:00:41 crc kubenswrapper[5010]: I0203 11:00:41.530451 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn7mc" event={"ID":"3b136e4b-d6df-4608-8e99-4d63efe1d513","Type":"ContainerDied","Data":"5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a"}
Feb 03 11:00:41 crc kubenswrapper[5010]: I0203 11:00:41.530594 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn7mc" event={"ID":"3b136e4b-d6df-4608-8e99-4d63efe1d513","Type":"ContainerStarted","Data":"bd74c4623e52fe568fd7ff3a820e2825e2272286970c6342fa508c26eaf7252a"}
Feb 03 11:00:41 crc kubenswrapper[5010]: I0203 11:00:41.532224 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 11:00:42 crc kubenswrapper[5010]: I0203 11:00:42.544491 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn7mc" event={"ID":"3b136e4b-d6df-4608-8e99-4d63efe1d513","Type":"ContainerStarted","Data":"402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b"}
Feb 03 11:00:45 crc kubenswrapper[5010]: I0203 11:00:45.585778 5010 generic.go:334] "Generic (PLEG): container finished" podID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerID="402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b" exitCode=0
Feb 03 11:00:45 crc kubenswrapper[5010]: I0203 11:00:45.586277 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn7mc" event={"ID":"3b136e4b-d6df-4608-8e99-4d63efe1d513","Type":"ContainerDied","Data":"402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b"}
Feb 03 11:00:47 crc kubenswrapper[5010]: I0203 11:00:47.607974 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn7mc" event={"ID":"3b136e4b-d6df-4608-8e99-4d63efe1d513","Type":"ContainerStarted","Data":"d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d"}
Feb 03 11:00:47 crc kubenswrapper[5010]: I0203 11:00:47.633084 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pn7mc" podStartSLOduration=2.69933617 podStartE2EDuration="7.633065291s" podCreationTimestamp="2026-02-03 11:00:40 +0000 UTC" firstStartedPulling="2026-02-03 11:00:41.531947027 +0000 UTC m=+3511.687923156" lastFinishedPulling="2026-02-03 11:00:46.465676148 +0000 UTC m=+3516.621652277" observedRunningTime="2026-02-03 11:00:47.626514847 +0000 UTC m=+3517.782490976" watchObservedRunningTime="2026-02-03 11:00:47.633065291 +0000 UTC m=+3517.789041410"
Feb 03 11:00:50 crc kubenswrapper[5010]: I0203 11:00:50.656707 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pn7mc"
Feb 03 11:00:50 crc kubenswrapper[5010]: I0203 11:00:50.658200 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pn7mc"
Feb 03 11:00:51 crc kubenswrapper[5010]: I0203 11:00:51.714145 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pn7mc" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="registry-server" probeResult="failure" output=<
Feb 03 11:00:51 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s
Feb 03 11:00:51 crc kubenswrapper[5010]: >
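The startup-probe output above ("timeout: failed to connect service \":50051\" within 1s") is a gRPC health check against a registry-server that is not listening yet; the same probe reports started a few seconds later. A sketch of an equivalent check, assuming the server implements the standard grpc.health.v1.Health service:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    // probe asks the gRPC health service for its serving status, bounded by
    // the same 1s timeout the failing startup probe used.
    func probe(addr string) error {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        conn, err := grpc.NewClient(addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            return err
        }
        defer conn.Close()
        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            return err
        }
        if resp.Status != healthpb.HealthCheckResponse_SERVING {
            return fmt.Errorf("not serving: %v", resp.Status)
        }
        return nil
    }

    func main() {
        fmt.Println(probe("127.0.0.1:50051"))
    }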
\"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.420813 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpcrj\" (UniqueName: \"kubernetes.io/projected/96c330a2-14f4-4923-8707-6b9cce98267f-kube-api-access-zpcrj\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.420853 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-combined-ca-bundle\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.427612 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-combined-ca-bundle\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.427813 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-config-data\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.429002 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-fernet-keys\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.440375 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpcrj\" (UniqueName: \"kubernetes.io/projected/96c330a2-14f4-4923-8707-6b9cce98267f-kube-api-access-zpcrj\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.480237 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.718667 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.777113 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.966345 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pn7mc"] Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.977241 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29501941-gv4sr"] Feb 03 11:01:01 crc kubenswrapper[5010]: I0203 11:01:01.769024 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29501941-gv4sr" event={"ID":"96c330a2-14f4-4923-8707-6b9cce98267f","Type":"ContainerStarted","Data":"02224ab559c551eecf6a9d4b9738db9679403937e8a11a5ef3eb2f054b61b9f4"} Feb 03 11:01:01 crc kubenswrapper[5010]: I0203 11:01:01.769408 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29501941-gv4sr" event={"ID":"96c330a2-14f4-4923-8707-6b9cce98267f","Type":"ContainerStarted","Data":"06fb52ad183ab788fc0bbae5e208e4038eec5dd6e3afd34dc9e60c51a49cf92f"} Feb 03 11:01:01 crc kubenswrapper[5010]: I0203 11:01:01.769201 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pn7mc" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="registry-server" containerID="cri-o://d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d" gracePeriod=2 Feb 03 11:01:01 crc kubenswrapper[5010]: I0203 11:01:01.800042 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29501941-gv4sr" podStartSLOduration=1.8000245879999999 podStartE2EDuration="1.800024588s" podCreationTimestamp="2026-02-03 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 11:01:01.79772166 +0000 UTC m=+3531.953697789" watchObservedRunningTime="2026-02-03 11:01:01.800024588 +0000 UTC m=+3531.956000717" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.427446 5010 util.go:48] "No ready sandbox for pod can be found. 
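The "Killing container with a grace period ... gracePeriod=2" entry above follows the standard termination contract: deliver SIGTERM, wait out the grace period, then SIGKILL. A minimal sketch of that contract against a local process (illustrative only; the kubelet drives this through the CRI's StopContainer call, not os/exec):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithGrace sends SIGTERM, waits up to grace for the process to exit,
    // then SIGKILLs it, mirroring the gracePeriod semantics in the log above.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
        if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
            return err
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited within the grace period
        case <-time.After(grace):
            _ = cmd.Process.Kill() // grace expired: SIGKILL
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        fmt.Println(stopWithGrace(cmd, 2*time.Second))
    }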
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.427446 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pn7mc"
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.572390 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kmc6\" (UniqueName: \"kubernetes.io/projected/3b136e4b-d6df-4608-8e99-4d63efe1d513-kube-api-access-6kmc6\") pod \"3b136e4b-d6df-4608-8e99-4d63efe1d513\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") "
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.572600 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-utilities\") pod \"3b136e4b-d6df-4608-8e99-4d63efe1d513\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") "
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.572659 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-catalog-content\") pod \"3b136e4b-d6df-4608-8e99-4d63efe1d513\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") "
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.574958 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-utilities" (OuterVolumeSpecName: "utilities") pod "3b136e4b-d6df-4608-8e99-4d63efe1d513" (UID: "3b136e4b-d6df-4608-8e99-4d63efe1d513"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.595664 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b136e4b-d6df-4608-8e99-4d63efe1d513-kube-api-access-6kmc6" (OuterVolumeSpecName: "kube-api-access-6kmc6") pod "3b136e4b-d6df-4608-8e99-4d63efe1d513" (UID: "3b136e4b-d6df-4608-8e99-4d63efe1d513"). InnerVolumeSpecName "kube-api-access-6kmc6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.675512 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.675555 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kmc6\" (UniqueName: \"kubernetes.io/projected/3b136e4b-d6df-4608-8e99-4d63efe1d513-kube-api-access-6kmc6\") on node \"crc\" DevicePath \"\""
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.743766 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b136e4b-d6df-4608-8e99-4d63efe1d513" (UID: "3b136e4b-d6df-4608-8e99-4d63efe1d513"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.778246 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.780980 5010 generic.go:334] "Generic (PLEG): container finished" podID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerID="d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d" exitCode=0
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.781114 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pn7mc"
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.781231 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn7mc" event={"ID":"3b136e4b-d6df-4608-8e99-4d63efe1d513","Type":"ContainerDied","Data":"d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d"}
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.781366 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn7mc" event={"ID":"3b136e4b-d6df-4608-8e99-4d63efe1d513","Type":"ContainerDied","Data":"bd74c4623e52fe568fd7ff3a820e2825e2272286970c6342fa508c26eaf7252a"}
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.781400 5010 scope.go:117] "RemoveContainer" containerID="d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d"
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.832507 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pn7mc"]
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.835592 5010 scope.go:117] "RemoveContainer" containerID="402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b"
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.844587 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pn7mc"]
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.872465 5010 scope.go:117] "RemoveContainer" containerID="5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a"
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.918590 5010 scope.go:117] "RemoveContainer" containerID="d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d"
Feb 03 11:01:02 crc kubenswrapper[5010]: E0203 11:01:02.923994 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d\": container with ID starting with d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d not found: ID does not exist" containerID="d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d"
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.924053 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d"} err="failed to get container status \"d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d\": rpc error: code = NotFound desc = could not find container \"d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d\": container with ID starting with d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d not found: ID does not exist"
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.924088 5010 scope.go:117] "RemoveContainer" containerID="402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b"
Feb 03 11:01:02 crc kubenswrapper[5010]: E0203 11:01:02.924473 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b\": container with ID starting with 402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b not found: ID does not exist" containerID="402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b"
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.924510 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b"} err="failed to get container status \"402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b\": rpc error: code = NotFound desc = could not find container \"402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b\": container with ID starting with 402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b not found: ID does not exist"
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.924533 5010 scope.go:117] "RemoveContainer" containerID="5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a"
Feb 03 11:01:02 crc kubenswrapper[5010]: E0203 11:01:02.924969 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a\": container with ID starting with 5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a not found: ID does not exist" containerID="5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a"
Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.925000 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a"} err="failed to get container status \"5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a\": rpc error: code = NotFound desc = could not find container \"5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a\": container with ID starting with 5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a not found: ID does not exist"
Feb 03 11:01:04 crc kubenswrapper[5010]: I0203 11:01:04.514982 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" path="/var/lib/kubelet/pods/3b136e4b-d6df-4608-8e99-4d63efe1d513/volumes"
Need to start a new one" pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.356592 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-config-data\") pod \"96c330a2-14f4-4923-8707-6b9cce98267f\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.356668 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-combined-ca-bundle\") pod \"96c330a2-14f4-4923-8707-6b9cce98267f\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.356773 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpcrj\" (UniqueName: \"kubernetes.io/projected/96c330a2-14f4-4923-8707-6b9cce98267f-kube-api-access-zpcrj\") pod \"96c330a2-14f4-4923-8707-6b9cce98267f\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.356869 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-fernet-keys\") pod \"96c330a2-14f4-4923-8707-6b9cce98267f\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.362683 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c330a2-14f4-4923-8707-6b9cce98267f-kube-api-access-zpcrj" (OuterVolumeSpecName: "kube-api-access-zpcrj") pod "96c330a2-14f4-4923-8707-6b9cce98267f" (UID: "96c330a2-14f4-4923-8707-6b9cce98267f"). InnerVolumeSpecName "kube-api-access-zpcrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.378336 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "96c330a2-14f4-4923-8707-6b9cce98267f" (UID: "96c330a2-14f4-4923-8707-6b9cce98267f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.386956 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96c330a2-14f4-4923-8707-6b9cce98267f" (UID: "96c330a2-14f4-4923-8707-6b9cce98267f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.413094 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-config-data" (OuterVolumeSpecName: "config-data") pod "96c330a2-14f4-4923-8707-6b9cce98267f" (UID: "96c330a2-14f4-4923-8707-6b9cce98267f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.461990 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.462036 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.462053 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpcrj\" (UniqueName: \"kubernetes.io/projected/96c330a2-14f4-4923-8707-6b9cce98267f-kube-api-access-zpcrj\") on node \"crc\" DevicePath \"\"" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.462065 5010 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.829545 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29501941-gv4sr" event={"ID":"96c330a2-14f4-4923-8707-6b9cce98267f","Type":"ContainerDied","Data":"06fb52ad183ab788fc0bbae5e208e4038eec5dd6e3afd34dc9e60c51a49cf92f"} Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.829875 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06fb52ad183ab788fc0bbae5e208e4038eec5dd6e3afd34dc9e60c51a49cf92f" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.829602 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:46 crc kubenswrapper[5010]: I0203 11:01:46.390618 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:01:46 crc kubenswrapper[5010]: I0203 11:01:46.391727 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:02:16 crc kubenswrapper[5010]: I0203 11:02:16.389962 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:02:16 crc kubenswrapper[5010]: I0203 11:02:16.391018 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.389925 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.390501 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.390559 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.391559 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.391618 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" gracePeriod=600 Feb 03 11:02:46 crc kubenswrapper[5010]: E0203 11:02:46.527119 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.956781 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" exitCode=0 Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.956841 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"} Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.956884 5010 scope.go:117] "RemoveContainer" containerID="954ea60c6e1c907175e18b080d65b7e14b322101b2585bb6251035ace6752460" Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.957671 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:02:46 crc kubenswrapper[5010]: E0203 11:02:46.958022 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:03:01 crc kubenswrapper[5010]: I0203 
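The three probe failures above arrive exactly 30 s apart (11:01:46, 11:02:16, 11:02:46) and the restart is ordered on the third, which is consistent with a livenessProbe like the sketch below. Only the endpoint (GET http://127.0.0.1:8798/health) and the restart-on-failure behaviour are taken from the log; periodSeconds is inferred from the spacing, failureThreshold=3 is an assumption, and the types are the upstream Kubernetes Go API types.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // livenessProbe reconstructs the probe implied by the entries above:
    // an HTTP GET against 127.0.0.1:8798/health on the
    // machine-config-daemon container.
    func livenessProbe() *corev1.Probe {
        return &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Host: "127.0.0.1",
                    Path: "/health",
                    Port: intstr.FromInt(8798),
                },
            },
            PeriodSeconds:    30, // matches the 30 s spacing of the failures
            FailureThreshold: 3,  // assumption: restart ordered on the third failure
        }
    }

    func main() {
        fmt.Printf("%+v\n", livenessProbe())
    }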
Feb 03 11:03:01 crc kubenswrapper[5010]: I0203 11:03:01.502277 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"
Feb 03 11:03:01 crc kubenswrapper[5010]: E0203 11:03:01.503097 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:03:14 crc kubenswrapper[5010]: I0203 11:03:14.502599 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"
Feb 03 11:03:14 crc kubenswrapper[5010]: E0203 11:03:14.504816 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:03:26 crc kubenswrapper[5010]: I0203 11:03:26.503006 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"
Feb 03 11:03:26 crc kubenswrapper[5010]: E0203 11:03:26.503751 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:03:38 crc kubenswrapper[5010]: I0203 11:03:38.503373 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"
Feb 03 11:03:38 crc kubenswrapper[5010]: E0203 11:03:38.504573 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:03:49 crc kubenswrapper[5010]: I0203 11:03:49.502747 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"
Feb 03 11:03:49 crc kubenswrapper[5010]: E0203 11:03:49.503472 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:04:01 crc kubenswrapper[5010]: I0203 11:04:01.502638 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"
Feb 03 11:04:01 crc kubenswrapper[5010]: E0203 11:04:01.503911 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
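The repeating "Error syncing pod, skipping ... CrashLoopBackOff" pairs above are the sync loop revisiting the pod every ~10-12 s and being refused while the container sits inside its restart back-off window. A minimal sketch of that doubling back-off follows; the 5m cap is exactly what the "back-off 5m0s" text reports, while the 10 s base is the kubelet's usual default and is an assumption here, not visible in this log.

    package main

    import (
        "fmt"
        "time"
    )

    // Restart back-off for a crash-looping container: the wait doubles
    // after each failed restart and saturates at a cap.
    func main() {
        const base = 10 * time.Second   // assumed kubelet default
        const maxDelay = 5 * time.Minute // the "back-off 5m0s" cap seen above
        delay := base
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("restart attempt %d: back-off %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s, ...
            }
        }
    }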
Feb 03 11:04:12 crc kubenswrapper[5010]: I0203 11:04:12.503542 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"
Feb 03 11:04:12 crc kubenswrapper[5010]: E0203 11:04:12.504459 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:04:17 crc kubenswrapper[5010]: I0203 11:04:17.869884 5010 generic.go:334] "Generic (PLEG): container finished" podID="8c8d92ab-5652-4bd9-81af-fd0be7aea36f" containerID="1dceb12710efc42bf7d1bc8254652d746deec954467b49662ae6e52ac9ca2747" exitCode=0
Feb 03 11:04:17 crc kubenswrapper[5010]: I0203 11:04:17.869950 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"8c8d92ab-5652-4bd9-81af-fd0be7aea36f","Type":"ContainerDied","Data":"1dceb12710efc42bf7d1bc8254652d746deec954467b49662ae6e52ac9ca2747"}
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.209435 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.263983 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config-secret\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") "
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264163 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") "
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264252 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-config-data\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") "
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264342 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45sks\" (UniqueName: \"kubernetes.io/projected/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-kube-api-access-45sks\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") "
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264381 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-workdir\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") "
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264485 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ca-certs\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") "
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264519 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ssh-key\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") "
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264577 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-temporary\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") "
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264628 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") "
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.265673 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-config-data" (OuterVolumeSpecName: "config-data") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.265998 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.270412 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.271098 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-kube-api-access-45sks" (OuterVolumeSpecName: "kube-api-access-45sks") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "kube-api-access-45sks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.273649 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.294637 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.299367 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.300119 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.328801 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.367904 5010 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ca-certs\") on node \"crc\" DevicePath \"\""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.367944 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ssh-key\") on node \"crc\" DevicePath \"\""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.367959 5010 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.367978 5010 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config\") on node \"crc\" DevicePath \"\""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.367993 5010 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.368047 5010 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.368061 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.368073 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45sks\" (UniqueName: \"kubernetes.io/projected/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-kube-api-access-45sks\") on node \"crc\" DevicePath \"\""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.368087 5010 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.390910 5010 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.469950 5010 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.896548 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"8c8d92ab-5652-4bd9-81af-fd0be7aea36f","Type":"ContainerDied","Data":"08d3852b3365aa6563a9026a76a312565c0566fd0792c861c656faa1a56176fa"}
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.896614 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08d3852b3365aa6563a9026a76a312565c0566fd0792c861c656faa1a56176fa"
Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.896628 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.659918 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 03 11:04:24 crc kubenswrapper[5010]: E0203 11:04:24.661097 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="registry-server"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661119 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="registry-server"
Feb 03 11:04:24 crc kubenswrapper[5010]: E0203 11:04:24.661134 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="extract-content"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661142 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="extract-content"
Feb 03 11:04:24 crc kubenswrapper[5010]: E0203 11:04:24.661158 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8d92ab-5652-4bd9-81af-fd0be7aea36f" containerName="tempest-tests-tempest-tests-runner"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661169 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8d92ab-5652-4bd9-81af-fd0be7aea36f" containerName="tempest-tests-tempest-tests-runner"
Feb 03 11:04:24 crc kubenswrapper[5010]: E0203 11:04:24.661200 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c330a2-14f4-4923-8707-6b9cce98267f" containerName="keystone-cron"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661208 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c330a2-14f4-4923-8707-6b9cce98267f" containerName="keystone-cron"
Feb 03 11:04:24 crc kubenswrapper[5010]: E0203 11:04:24.661263 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="extract-utilities"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661272 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="extract-utilities"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661501 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c330a2-14f4-4923-8707-6b9cce98267f" containerName="keystone-cron"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661521 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="registry-server"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661536 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8d92ab-5652-4bd9-81af-fd0be7aea36f" containerName="tempest-tests-tempest-tests-runner"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.662486 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.665880 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-sbxfw"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.671786 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.793380 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jqzv\" (UniqueName: \"kubernetes.io/projected/8dfa1254-0d2c-4885-a531-fc90541692e7-kube-api-access-2jqzv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8dfa1254-0d2c-4885-a531-fc90541692e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.793528 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8dfa1254-0d2c-4885-a531-fc90541692e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.895666 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8dfa1254-0d2c-4885-a531-fc90541692e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.895822 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jqzv\" (UniqueName: \"kubernetes.io/projected/8dfa1254-0d2c-4885-a531-fc90541692e7-kube-api-access-2jqzv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8dfa1254-0d2c-4885-a531-fc90541692e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.896803 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8dfa1254-0d2c-4885-a531-fc90541692e7\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.929352 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jqzv\" (UniqueName: \"kubernetes.io/projected/8dfa1254-0d2c-4885-a531-fc90541692e7-kube-api-access-2jqzv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8dfa1254-0d2c-4885-a531-fc90541692e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.929906 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8dfa1254-0d2c-4885-a531-fc90541692e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.985699 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 03 11:04:25 crc kubenswrapper[5010]: I0203 11:04:25.543935 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 03 11:04:25 crc kubenswrapper[5010]: I0203 11:04:25.965564 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8dfa1254-0d2c-4885-a531-fc90541692e7","Type":"ContainerStarted","Data":"e341c550b31d29eb33b1c0a71c63d307d4cc08c9d8897e30349883e45037a56e"}
Feb 03 11:04:26 crc kubenswrapper[5010]: I0203 11:04:26.503529 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"
Feb 03 11:04:26 crc kubenswrapper[5010]: E0203 11:04:26.503947 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:04:29 crc kubenswrapper[5010]: I0203 11:04:28.999670 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8dfa1254-0d2c-4885-a531-fc90541692e7","Type":"ContainerStarted","Data":"a348e3b9174781a806094c750543012ef2237e2d290dc5b69e33c27024d730dc"}
Feb 03 11:04:29 crc kubenswrapper[5010]: I0203 11:04:29.025287 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.7439766629999998 podStartE2EDuration="5.025261051s" podCreationTimestamp="2026-02-03 11:04:24 +0000 UTC" firstStartedPulling="2026-02-03 11:04:25.549635685 +0000 UTC m=+3735.705611814" lastFinishedPulling="2026-02-03 11:04:27.830920073 +0000 UTC m=+3737.986896202" observedRunningTime="2026-02-03 11:04:29.01885253 +0000 UTC m=+3739.174828649" watchObservedRunningTime="2026-02-03 11:04:29.025261051 +0000 UTC m=+3739.181237190"
Feb 03 11:04:40 crc kubenswrapper[5010]: I0203 11:04:40.514417 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"
Feb 03 11:04:40 crc kubenswrapper[5010]: E0203 11:04:40.515275 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.777776 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hfbsh/must-gather-hdcmp"]
Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.782176 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfbsh/must-gather-hdcmp"
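The startup-latency entry above is internally consistent: the end-to-end duration is observedRunningTime minus podCreationTimestamp, and the SLO duration excludes the image-pull window:

\begin{align*}
\text{podStartE2EDuration} &= 11{:}04{:}29.025261051 - 11{:}04{:}24 = 5.025261051\ \text{s}\\
\text{pull window} &= 11{:}04{:}27.830920073 - 11{:}04{:}25.549635685 = 2.281284388\ \text{s}\\
\text{podStartSLOduration} &= 5.025261051 - 2.281284388 = 2.743976663\ \text{s}
\end{align*}

which matches the reported podStartSLOduration=2.7439766629999998 up to floating-point formatting.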
Need to start a new one" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.786556 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hfbsh"/"kube-root-ca.crt" Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.786569 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hfbsh"/"default-dockercfg-d5q4j" Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.786801 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hfbsh"/"openshift-service-ca.crt" Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.793492 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hfbsh/must-gather-hdcmp"] Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.912476 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4jzz\" (UniqueName: \"kubernetes.io/projected/a60388dd-8e4d-463c-a5da-b210ae7c19fd-kube-api-access-t4jzz\") pod \"must-gather-hdcmp\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") " pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.912799 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a60388dd-8e4d-463c-a5da-b210ae7c19fd-must-gather-output\") pod \"must-gather-hdcmp\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") " pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:51 crc kubenswrapper[5010]: I0203 11:04:51.015995 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4jzz\" (UniqueName: \"kubernetes.io/projected/a60388dd-8e4d-463c-a5da-b210ae7c19fd-kube-api-access-t4jzz\") pod \"must-gather-hdcmp\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") " pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:51 crc kubenswrapper[5010]: I0203 11:04:51.016076 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a60388dd-8e4d-463c-a5da-b210ae7c19fd-must-gather-output\") pod \"must-gather-hdcmp\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") " pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:51 crc kubenswrapper[5010]: I0203 11:04:51.016527 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a60388dd-8e4d-463c-a5da-b210ae7c19fd-must-gather-output\") pod \"must-gather-hdcmp\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") " pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:51 crc kubenswrapper[5010]: I0203 11:04:51.040887 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4jzz\" (UniqueName: \"kubernetes.io/projected/a60388dd-8e4d-463c-a5da-b210ae7c19fd-kube-api-access-t4jzz\") pod \"must-gather-hdcmp\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") " pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:51 crc kubenswrapper[5010]: I0203 11:04:51.105432 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:51 crc kubenswrapper[5010]: I0203 11:04:51.502690 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:04:51 crc kubenswrapper[5010]: E0203 11:04:51.503517 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:04:51 crc kubenswrapper[5010]: I0203 11:04:51.645255 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hfbsh/must-gather-hdcmp"] Feb 03 11:04:52 crc kubenswrapper[5010]: I0203 11:04:52.243935 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" event={"ID":"a60388dd-8e4d-463c-a5da-b210ae7c19fd","Type":"ContainerStarted","Data":"733196c23cec8a07b2e963207170368dc3a4f7a3b1625d9daceaf99fb3062f38"} Feb 03 11:04:56 crc kubenswrapper[5010]: I0203 11:04:56.303414 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" event={"ID":"a60388dd-8e4d-463c-a5da-b210ae7c19fd","Type":"ContainerStarted","Data":"f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96"} Feb 03 11:04:57 crc kubenswrapper[5010]: I0203 11:04:57.319980 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" event={"ID":"a60388dd-8e4d-463c-a5da-b210ae7c19fd","Type":"ContainerStarted","Data":"d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774"} Feb 03 11:04:57 crc kubenswrapper[5010]: I0203 11:04:57.349165 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" podStartSLOduration=3.185955369 podStartE2EDuration="7.349135453s" podCreationTimestamp="2026-02-03 11:04:50 +0000 UTC" firstStartedPulling="2026-02-03 11:04:51.627906193 +0000 UTC m=+3761.783882322" lastFinishedPulling="2026-02-03 11:04:55.791086277 +0000 UTC m=+3765.947062406" observedRunningTime="2026-02-03 11:04:57.339862332 +0000 UTC m=+3767.495838471" watchObservedRunningTime="2026-02-03 11:04:57.349135453 +0000 UTC m=+3767.505111582" Feb 03 11:05:00 crc kubenswrapper[5010]: I0203 11:05:00.845114 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-knxkc"] Feb 03 11:05:00 crc kubenswrapper[5010]: I0203 11:05:00.848075 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:00 crc kubenswrapper[5010]: I0203 11:05:00.972552 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd16c451-5cc4-448a-b612-059a4c677f3a-host\") pod \"crc-debug-knxkc\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:00 crc kubenswrapper[5010]: I0203 11:05:00.972752 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2n55\" (UniqueName: \"kubernetes.io/projected/dd16c451-5cc4-448a-b612-059a4c677f3a-kube-api-access-x2n55\") pod \"crc-debug-knxkc\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:01 crc kubenswrapper[5010]: I0203 11:05:01.075418 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd16c451-5cc4-448a-b612-059a4c677f3a-host\") pod \"crc-debug-knxkc\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:01 crc kubenswrapper[5010]: I0203 11:05:01.075535 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2n55\" (UniqueName: \"kubernetes.io/projected/dd16c451-5cc4-448a-b612-059a4c677f3a-kube-api-access-x2n55\") pod \"crc-debug-knxkc\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:01 crc kubenswrapper[5010]: I0203 11:05:01.075635 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd16c451-5cc4-448a-b612-059a4c677f3a-host\") pod \"crc-debug-knxkc\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:01 crc kubenswrapper[5010]: I0203 11:05:01.109014 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2n55\" (UniqueName: \"kubernetes.io/projected/dd16c451-5cc4-448a-b612-059a4c677f3a-kube-api-access-x2n55\") pod \"crc-debug-knxkc\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:01 crc kubenswrapper[5010]: I0203 11:05:01.171420 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:01 crc kubenswrapper[5010]: I0203 11:05:01.379425 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" event={"ID":"dd16c451-5cc4-448a-b612-059a4c677f3a","Type":"ContainerStarted","Data":"c1ab1788d82b88c9a9c9bced47ba87ac4f6c2b40b93983006b0e6ecb867d4af2"} Feb 03 11:05:04 crc kubenswrapper[5010]: I0203 11:05:04.504294 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:05:04 crc kubenswrapper[5010]: E0203 11:05:04.505365 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:05:15 crc kubenswrapper[5010]: I0203 11:05:15.504145 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:05:15 crc kubenswrapper[5010]: E0203 11:05:15.506755 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:05:17 crc kubenswrapper[5010]: E0203 11:05:17.144863 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Feb 03 11:05:17 crc kubenswrapper[5010]: E0203 11:05:17.145368 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; 
fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2n55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-knxkc_openshift-must-gather-hfbsh(dd16c451-5cc4-448a-b612-059a4c677f3a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 11:05:17 crc kubenswrapper[5010]: E0203 11:05:17.147049 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" podUID="dd16c451-5cc4-448a-b612-059a4c677f3a" Feb 03 11:05:17 crc kubenswrapper[5010]: E0203 11:05:17.571974 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" podUID="dd16c451-5cc4-448a-b612-059a4c677f3a" Feb 03 11:05:27 crc kubenswrapper[5010]: I0203 11:05:27.502409 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:05:27 crc kubenswrapper[5010]: E0203 11:05:27.505121 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:05:31 crc kubenswrapper[5010]: I0203 11:05:31.712100 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" event={"ID":"dd16c451-5cc4-448a-b612-059a4c677f3a","Type":"ContainerStarted","Data":"1f9a8d3208b3a091c4939acca4f01ee3cd93e0bcc6269bf3b3f3541f7c35fd87"} Feb 03 11:05:31 crc kubenswrapper[5010]: I0203 11:05:31.739027 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" podStartSLOduration=1.905847362 podStartE2EDuration="31.739000879s" podCreationTimestamp="2026-02-03 11:05:00 +0000 UTC" firstStartedPulling="2026-02-03 11:05:01.243893754 +0000 UTC 
m=+3771.399869883" lastFinishedPulling="2026-02-03 11:05:31.077047251 +0000 UTC m=+3801.233023400" observedRunningTime="2026-02-03 11:05:31.730737003 +0000 UTC m=+3801.886713132" watchObservedRunningTime="2026-02-03 11:05:31.739000879 +0000 UTC m=+3801.894977008" Feb 03 11:05:38 crc kubenswrapper[5010]: I0203 11:05:38.502988 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:05:38 crc kubenswrapper[5010]: E0203 11:05:38.503756 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:05:50 crc kubenswrapper[5010]: I0203 11:05:50.513018 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:05:50 crc kubenswrapper[5010]: E0203 11:05:50.514356 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:06:00 crc kubenswrapper[5010]: I0203 11:06:00.959084 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j9shv"] Feb 03 11:06:00 crc kubenswrapper[5010]: I0203 11:06:00.962152 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:00 crc kubenswrapper[5010]: I0203 11:06:00.984513 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9shv"] Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.052969 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-utilities\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.053021 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gll69\" (UniqueName: \"kubernetes.io/projected/96b0797d-7099-4ce0-a9a7-063e41fce220-kube-api-access-gll69\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.053085 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-catalog-content\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.155283 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-catalog-content\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.155501 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-utilities\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.155542 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gll69\" (UniqueName: \"kubernetes.io/projected/96b0797d-7099-4ce0-a9a7-063e41fce220-kube-api-access-gll69\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.155993 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-catalog-content\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.156531 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-utilities\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.178700 5010 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gll69\" (UniqueName: \"kubernetes.io/projected/96b0797d-7099-4ce0-a9a7-063e41fce220-kube-api-access-gll69\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.290679 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.838974 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9shv"] Feb 03 11:06:02 crc kubenswrapper[5010]: I0203 11:06:02.060717 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9shv" event={"ID":"96b0797d-7099-4ce0-a9a7-063e41fce220","Type":"ContainerStarted","Data":"5cc6fe406958620e7e04ce434688e594444745b347a57f9de5721db9cf7c2290"} Feb 03 11:06:03 crc kubenswrapper[5010]: I0203 11:06:03.072703 5010 generic.go:334] "Generic (PLEG): container finished" podID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerID="43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7" exitCode=0 Feb 03 11:06:03 crc kubenswrapper[5010]: I0203 11:06:03.074312 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9shv" event={"ID":"96b0797d-7099-4ce0-a9a7-063e41fce220","Type":"ContainerDied","Data":"43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7"} Feb 03 11:06:03 crc kubenswrapper[5010]: I0203 11:06:03.075116 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 11:06:03 crc kubenswrapper[5010]: I0203 11:06:03.503517 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:06:03 crc kubenswrapper[5010]: E0203 11:06:03.504392 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:06:04 crc kubenswrapper[5010]: I0203 11:06:04.083733 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9shv" event={"ID":"96b0797d-7099-4ce0-a9a7-063e41fce220","Type":"ContainerStarted","Data":"903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4"} Feb 03 11:06:05 crc kubenswrapper[5010]: I0203 11:06:05.096252 5010 generic.go:334] "Generic (PLEG): container finished" podID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerID="903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4" exitCode=0 Feb 03 11:06:05 crc kubenswrapper[5010]: I0203 11:06:05.096394 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9shv" event={"ID":"96b0797d-7099-4ce0-a9a7-063e41fce220","Type":"ContainerDied","Data":"903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4"} Feb 03 11:06:06 crc kubenswrapper[5010]: I0203 11:06:06.109162 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9shv" 
event={"ID":"96b0797d-7099-4ce0-a9a7-063e41fce220","Type":"ContainerStarted","Data":"ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8"} Feb 03 11:06:06 crc kubenswrapper[5010]: I0203 11:06:06.139297 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j9shv" podStartSLOduration=3.5060776970000003 podStartE2EDuration="6.139273358s" podCreationTimestamp="2026-02-03 11:06:00 +0000 UTC" firstStartedPulling="2026-02-03 11:06:03.074848749 +0000 UTC m=+3833.230824868" lastFinishedPulling="2026-02-03 11:06:05.7080444 +0000 UTC m=+3835.864020529" observedRunningTime="2026-02-03 11:06:06.132748275 +0000 UTC m=+3836.288724424" watchObservedRunningTime="2026-02-03 11:06:06.139273358 +0000 UTC m=+3836.295249487" Feb 03 11:06:11 crc kubenswrapper[5010]: I0203 11:06:11.291303 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:11 crc kubenswrapper[5010]: I0203 11:06:11.291722 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:11 crc kubenswrapper[5010]: I0203 11:06:11.343057 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:12 crc kubenswrapper[5010]: I0203 11:06:12.230686 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:12 crc kubenswrapper[5010]: I0203 11:06:12.293444 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9shv"] Feb 03 11:06:14 crc kubenswrapper[5010]: I0203 11:06:14.187008 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j9shv" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="registry-server" containerID="cri-o://ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8" gracePeriod=2 Feb 03 11:06:14 crc kubenswrapper[5010]: I0203 11:06:14.507325 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:06:14 crc kubenswrapper[5010]: E0203 11:06:14.507558 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:06:14 crc kubenswrapper[5010]: I0203 11:06:14.866998 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:14 crc kubenswrapper[5010]: I0203 11:06:14.996788 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-utilities\") pod \"96b0797d-7099-4ce0-a9a7-063e41fce220\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " Feb 03 11:06:14 crc kubenswrapper[5010]: I0203 11:06:14.997205 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gll69\" (UniqueName: \"kubernetes.io/projected/96b0797d-7099-4ce0-a9a7-063e41fce220-kube-api-access-gll69\") pod \"96b0797d-7099-4ce0-a9a7-063e41fce220\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " Feb 03 11:06:14 crc kubenswrapper[5010]: I0203 11:06:14.997349 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-catalog-content\") pod \"96b0797d-7099-4ce0-a9a7-063e41fce220\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " Feb 03 11:06:14 crc kubenswrapper[5010]: I0203 11:06:14.998059 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-utilities" (OuterVolumeSpecName: "utilities") pod "96b0797d-7099-4ce0-a9a7-063e41fce220" (UID: "96b0797d-7099-4ce0-a9a7-063e41fce220"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.014028 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b0797d-7099-4ce0-a9a7-063e41fce220-kube-api-access-gll69" (OuterVolumeSpecName: "kube-api-access-gll69") pod "96b0797d-7099-4ce0-a9a7-063e41fce220" (UID: "96b0797d-7099-4ce0-a9a7-063e41fce220"). InnerVolumeSpecName "kube-api-access-gll69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.026982 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96b0797d-7099-4ce0-a9a7-063e41fce220" (UID: "96b0797d-7099-4ce0-a9a7-063e41fce220"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.101021 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gll69\" (UniqueName: \"kubernetes.io/projected/96b0797d-7099-4ce0-a9a7-063e41fce220-kube-api-access-gll69\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.101391 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.101532 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.215366 5010 generic.go:334] "Generic (PLEG): container finished" podID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerID="ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8" exitCode=0 Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.215426 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.215442 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9shv" event={"ID":"96b0797d-7099-4ce0-a9a7-063e41fce220","Type":"ContainerDied","Data":"ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8"} Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.215489 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9shv" event={"ID":"96b0797d-7099-4ce0-a9a7-063e41fce220","Type":"ContainerDied","Data":"5cc6fe406958620e7e04ce434688e594444745b347a57f9de5721db9cf7c2290"} Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.215537 5010 scope.go:117] "RemoveContainer" containerID="ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.244330 5010 scope.go:117] "RemoveContainer" containerID="903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.260499 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9shv"] Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.272483 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9shv"] Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.277735 5010 scope.go:117] "RemoveContainer" containerID="43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.344474 5010 scope.go:117] "RemoveContainer" containerID="ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8" Feb 03 11:06:15 crc kubenswrapper[5010]: E0203 11:06:15.344975 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8\": container with ID starting with ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8 not found: ID does not exist" containerID="ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.345025 5010 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8"} err="failed to get container status \"ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8\": rpc error: code = NotFound desc = could not find container \"ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8\": container with ID starting with ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8 not found: ID does not exist" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.345055 5010 scope.go:117] "RemoveContainer" containerID="903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4" Feb 03 11:06:15 crc kubenswrapper[5010]: E0203 11:06:15.345454 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4\": container with ID starting with 903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4 not found: ID does not exist" containerID="903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.345503 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4"} err="failed to get container status \"903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4\": rpc error: code = NotFound desc = could not find container \"903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4\": container with ID starting with 903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4 not found: ID does not exist" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.345522 5010 scope.go:117] "RemoveContainer" containerID="43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7" Feb 03 11:06:15 crc kubenswrapper[5010]: E0203 11:06:15.345862 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7\": container with ID starting with 43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7 not found: ID does not exist" containerID="43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.345890 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7"} err="failed to get container status \"43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7\": rpc error: code = NotFound desc = could not find container \"43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7\": container with ID starting with 43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7 not found: ID does not exist" Feb 03 11:06:16 crc kubenswrapper[5010]: I0203 11:06:16.519802 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" path="/var/lib/kubelet/pods/96b0797d-7099-4ce0-a9a7-063e41fce220/volumes" Feb 03 11:06:19 crc kubenswrapper[5010]: I0203 11:06:19.265735 5010 generic.go:334] "Generic (PLEG): container finished" podID="dd16c451-5cc4-448a-b612-059a4c677f3a" containerID="1f9a8d3208b3a091c4939acca4f01ee3cd93e0bcc6269bf3b3f3541f7c35fd87" exitCode=0 Feb 03 11:06:19 crc kubenswrapper[5010]: I0203 
11:06:19.265826 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" event={"ID":"dd16c451-5cc4-448a-b612-059a4c677f3a","Type":"ContainerDied","Data":"1f9a8d3208b3a091c4939acca4f01ee3cd93e0bcc6269bf3b3f3541f7c35fd87"} Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.417541 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.425113 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2n55\" (UniqueName: \"kubernetes.io/projected/dd16c451-5cc4-448a-b612-059a4c677f3a-kube-api-access-x2n55\") pod \"dd16c451-5cc4-448a-b612-059a4c677f3a\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.433878 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd16c451-5cc4-448a-b612-059a4c677f3a-kube-api-access-x2n55" (OuterVolumeSpecName: "kube-api-access-x2n55") pod "dd16c451-5cc4-448a-b612-059a4c677f3a" (UID: "dd16c451-5cc4-448a-b612-059a4c677f3a"). InnerVolumeSpecName "kube-api-access-x2n55". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.478168 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-knxkc"] Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.488630 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-knxkc"] Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.530417 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd16c451-5cc4-448a-b612-059a4c677f3a-host\") pod \"dd16c451-5cc4-448a-b612-059a4c677f3a\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.530642 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd16c451-5cc4-448a-b612-059a4c677f3a-host" (OuterVolumeSpecName: "host") pod "dd16c451-5cc4-448a-b612-059a4c677f3a" (UID: "dd16c451-5cc4-448a-b612-059a4c677f3a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.531423 5010 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd16c451-5cc4-448a-b612-059a4c677f3a-host\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.531460 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2n55\" (UniqueName: \"kubernetes.io/projected/dd16c451-5cc4-448a-b612-059a4c677f3a-kube-api-access-x2n55\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.533160 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd16c451-5cc4-448a-b612-059a4c677f3a" path="/var/lib/kubelet/pods/dd16c451-5cc4-448a-b612-059a4c677f3a/volumes" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.293483 5010 scope.go:117] "RemoveContainer" containerID="1f9a8d3208b3a091c4939acca4f01ee3cd93e0bcc6269bf3b3f3541f7c35fd87" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.293569 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.658006 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-dv2q8"] Feb 03 11:06:21 crc kubenswrapper[5010]: E0203 11:06:21.658665 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="extract-content" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.658678 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="extract-content" Feb 03 11:06:21 crc kubenswrapper[5010]: E0203 11:06:21.658691 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd16c451-5cc4-448a-b612-059a4c677f3a" containerName="container-00" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.658696 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd16c451-5cc4-448a-b612-059a4c677f3a" containerName="container-00" Feb 03 11:06:21 crc kubenswrapper[5010]: E0203 11:06:21.658725 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="extract-utilities" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.658733 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="extract-utilities" Feb 03 11:06:21 crc kubenswrapper[5010]: E0203 11:06:21.658750 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="registry-server" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.658756 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="registry-server" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.658946 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd16c451-5cc4-448a-b612-059a4c677f3a" containerName="container-00" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.658970 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="registry-server" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.659617 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.760429 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5gcr\" (UniqueName: \"kubernetes.io/projected/4cc3c54a-befe-4c86-8ae8-e0759feb54be-kube-api-access-v5gcr\") pod \"crc-debug-dv2q8\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.760526 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cc3c54a-befe-4c86-8ae8-e0759feb54be-host\") pod \"crc-debug-dv2q8\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.862475 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5gcr\" (UniqueName: \"kubernetes.io/projected/4cc3c54a-befe-4c86-8ae8-e0759feb54be-kube-api-access-v5gcr\") pod \"crc-debug-dv2q8\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.862606 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cc3c54a-befe-4c86-8ae8-e0759feb54be-host\") pod \"crc-debug-dv2q8\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.862731 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cc3c54a-befe-4c86-8ae8-e0759feb54be-host\") pod \"crc-debug-dv2q8\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.883688 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5gcr\" (UniqueName: \"kubernetes.io/projected/4cc3c54a-befe-4c86-8ae8-e0759feb54be-kube-api-access-v5gcr\") pod \"crc-debug-dv2q8\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.988391 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:22 crc kubenswrapper[5010]: I0203 11:06:22.308942 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" event={"ID":"4cc3c54a-befe-4c86-8ae8-e0759feb54be","Type":"ContainerStarted","Data":"15af73ffcb8b076a414d8d6a10ce7a3a25d6b8b42f27c224aa60cf656f8481d3"} Feb 03 11:06:22 crc kubenswrapper[5010]: I0203 11:06:22.309348 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" event={"ID":"4cc3c54a-befe-4c86-8ae8-e0759feb54be","Type":"ContainerStarted","Data":"79807be93a018e5e3e7ee81fcbdd530b0a73975e4ce12033454593dbc2394f7c"} Feb 03 11:06:23 crc kubenswrapper[5010]: I0203 11:06:23.320828 5010 generic.go:334] "Generic (PLEG): container finished" podID="4cc3c54a-befe-4c86-8ae8-e0759feb54be" containerID="15af73ffcb8b076a414d8d6a10ce7a3a25d6b8b42f27c224aa60cf656f8481d3" exitCode=0 Feb 03 11:06:23 crc kubenswrapper[5010]: I0203 11:06:23.320898 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" event={"ID":"4cc3c54a-befe-4c86-8ae8-e0759feb54be","Type":"ContainerDied","Data":"15af73ffcb8b076a414d8d6a10ce7a3a25d6b8b42f27c224aa60cf656f8481d3"} Feb 03 11:06:23 crc kubenswrapper[5010]: I0203 11:06:23.896068 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-dv2q8"] Feb 03 11:06:23 crc kubenswrapper[5010]: I0203 11:06:23.906061 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-dv2q8"] Feb 03 11:06:24 crc kubenswrapper[5010]: I0203 11:06:24.451294 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:24 crc kubenswrapper[5010]: I0203 11:06:24.617870 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5gcr\" (UniqueName: \"kubernetes.io/projected/4cc3c54a-befe-4c86-8ae8-e0759feb54be-kube-api-access-v5gcr\") pod \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " Feb 03 11:06:24 crc kubenswrapper[5010]: I0203 11:06:24.618069 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cc3c54a-befe-4c86-8ae8-e0759feb54be-host\") pod \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " Feb 03 11:06:24 crc kubenswrapper[5010]: I0203 11:06:24.618206 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cc3c54a-befe-4c86-8ae8-e0759feb54be-host" (OuterVolumeSpecName: "host") pod "4cc3c54a-befe-4c86-8ae8-e0759feb54be" (UID: "4cc3c54a-befe-4c86-8ae8-e0759feb54be"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 11:06:24 crc kubenswrapper[5010]: I0203 11:06:24.619433 5010 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cc3c54a-befe-4c86-8ae8-e0759feb54be-host\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:24 crc kubenswrapper[5010]: I0203 11:06:24.636587 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cc3c54a-befe-4c86-8ae8-e0759feb54be-kube-api-access-v5gcr" (OuterVolumeSpecName: "kube-api-access-v5gcr") pod "4cc3c54a-befe-4c86-8ae8-e0759feb54be" (UID: "4cc3c54a-befe-4c86-8ae8-e0759feb54be"). 
InnerVolumeSpecName "kube-api-access-v5gcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:06:24 crc kubenswrapper[5010]: I0203 11:06:24.721794 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5gcr\" (UniqueName: \"kubernetes.io/projected/4cc3c54a-befe-4c86-8ae8-e0759feb54be-kube-api-access-v5gcr\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.096433 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-5rbtp"] Feb 03 11:06:25 crc kubenswrapper[5010]: E0203 11:06:25.096976 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc3c54a-befe-4c86-8ae8-e0759feb54be" containerName="container-00" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.096999 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc3c54a-befe-4c86-8ae8-e0759feb54be" containerName="container-00" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.097231 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc3c54a-befe-4c86-8ae8-e0759feb54be" containerName="container-00" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.098007 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.232748 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77lwf\" (UniqueName: \"kubernetes.io/projected/862810dd-615e-414c-96cd-45c3e36631c5-kube-api-access-77lwf\") pod \"crc-debug-5rbtp\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.232907 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/862810dd-615e-414c-96cd-45c3e36631c5-host\") pod \"crc-debug-5rbtp\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.334573 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77lwf\" (UniqueName: \"kubernetes.io/projected/862810dd-615e-414c-96cd-45c3e36631c5-kube-api-access-77lwf\") pod \"crc-debug-5rbtp\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.335131 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/862810dd-615e-414c-96cd-45c3e36631c5-host\") pod \"crc-debug-5rbtp\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.335202 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/862810dd-615e-414c-96cd-45c3e36631c5-host\") pod \"crc-debug-5rbtp\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.344832 5010 scope.go:117] "RemoveContainer" containerID="15af73ffcb8b076a414d8d6a10ce7a3a25d6b8b42f27c224aa60cf656f8481d3" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.345013 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.352995 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77lwf\" (UniqueName: \"kubernetes.io/projected/862810dd-615e-414c-96cd-45c3e36631c5-kube-api-access-77lwf\") pod \"crc-debug-5rbtp\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.417829 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: W0203 11:06:25.469429 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod862810dd_615e_414c_96cd_45c3e36631c5.slice/crio-2de5e4e984d0eb512f2e79d5fe584e2238824bb16d10676b1e8366de87309253 WatchSource:0}: Error finding container 2de5e4e984d0eb512f2e79d5fe584e2238824bb16d10676b1e8366de87309253: Status 404 returned error can't find the container with id 2de5e4e984d0eb512f2e79d5fe584e2238824bb16d10676b1e8366de87309253 Feb 03 11:06:26 crc kubenswrapper[5010]: I0203 11:06:26.356602 5010 generic.go:334] "Generic (PLEG): container finished" podID="862810dd-615e-414c-96cd-45c3e36631c5" containerID="eb7fe92d16e697b6743828ad5dc47e1b42d862e83b10caf5c522d84c6c42c336" exitCode=0 Feb 03 11:06:26 crc kubenswrapper[5010]: I0203 11:06:26.356718 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" event={"ID":"862810dd-615e-414c-96cd-45c3e36631c5","Type":"ContainerDied","Data":"eb7fe92d16e697b6743828ad5dc47e1b42d862e83b10caf5c522d84c6c42c336"} Feb 03 11:06:26 crc kubenswrapper[5010]: I0203 11:06:26.356970 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" event={"ID":"862810dd-615e-414c-96cd-45c3e36631c5","Type":"ContainerStarted","Data":"2de5e4e984d0eb512f2e79d5fe584e2238824bb16d10676b1e8366de87309253"} Feb 03 11:06:26 crc kubenswrapper[5010]: I0203 11:06:26.428907 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-5rbtp"] Feb 03 11:06:26 crc kubenswrapper[5010]: I0203 11:06:26.437110 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-5rbtp"] Feb 03 11:06:26 crc kubenswrapper[5010]: I0203 11:06:26.503036 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:06:26 crc kubenswrapper[5010]: E0203 11:06:26.503737 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:06:26 crc kubenswrapper[5010]: I0203 11:06:26.522424 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cc3c54a-befe-4c86-8ae8-e0759feb54be" path="/var/lib/kubelet/pods/4cc3c54a-befe-4c86-8ae8-e0759feb54be/volumes" Feb 03 11:06:27 crc kubenswrapper[5010]: I0203 11:06:27.469607 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:27 crc kubenswrapper[5010]: I0203 11:06:27.584646 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77lwf\" (UniqueName: \"kubernetes.io/projected/862810dd-615e-414c-96cd-45c3e36631c5-kube-api-access-77lwf\") pod \"862810dd-615e-414c-96cd-45c3e36631c5\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " Feb 03 11:06:27 crc kubenswrapper[5010]: I0203 11:06:27.584771 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/862810dd-615e-414c-96cd-45c3e36631c5-host\") pod \"862810dd-615e-414c-96cd-45c3e36631c5\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " Feb 03 11:06:27 crc kubenswrapper[5010]: I0203 11:06:27.585821 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/862810dd-615e-414c-96cd-45c3e36631c5-host" (OuterVolumeSpecName: "host") pod "862810dd-615e-414c-96cd-45c3e36631c5" (UID: "862810dd-615e-414c-96cd-45c3e36631c5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 11:06:27 crc kubenswrapper[5010]: I0203 11:06:27.599299 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862810dd-615e-414c-96cd-45c3e36631c5-kube-api-access-77lwf" (OuterVolumeSpecName: "kube-api-access-77lwf") pod "862810dd-615e-414c-96cd-45c3e36631c5" (UID: "862810dd-615e-414c-96cd-45c3e36631c5"). InnerVolumeSpecName "kube-api-access-77lwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:06:27 crc kubenswrapper[5010]: I0203 11:06:27.687637 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77lwf\" (UniqueName: \"kubernetes.io/projected/862810dd-615e-414c-96cd-45c3e36631c5-kube-api-access-77lwf\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:27 crc kubenswrapper[5010]: I0203 11:06:27.687695 5010 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/862810dd-615e-414c-96cd-45c3e36631c5-host\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:28 crc kubenswrapper[5010]: I0203 11:06:28.381036 5010 scope.go:117] "RemoveContainer" containerID="eb7fe92d16e697b6743828ad5dc47e1b42d862e83b10caf5c522d84c6c42c336" Feb 03 11:06:28 crc kubenswrapper[5010]: I0203 11:06:28.381162 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:28 crc kubenswrapper[5010]: I0203 11:06:28.517893 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="862810dd-615e-414c-96cd-45c3e36631c5" path="/var/lib/kubelet/pods/862810dd-615e-414c-96cd-45c3e36631c5/volumes" Feb 03 11:06:40 crc kubenswrapper[5010]: I0203 11:06:40.519911 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:06:40 crc kubenswrapper[5010]: E0203 11:06:40.520694 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:06:44 crc kubenswrapper[5010]: I0203 11:06:44.405844 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f67746f54-2l6b9_3bab826b-af5f-4bd1-a68a-0bdda5f89d80/barbican-api/0.log" Feb 03 11:06:44 crc kubenswrapper[5010]: I0203 11:06:44.464430 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f67746f54-2l6b9_3bab826b-af5f-4bd1-a68a-0bdda5f89d80/barbican-api-log/0.log" Feb 03 11:06:44 crc kubenswrapper[5010]: I0203 11:06:44.602977 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-85855ff49d-76x8k_f377630f-64f3-4fd9-8449-53d739d775c2/barbican-keystone-listener/0.log" Feb 03 11:06:44 crc kubenswrapper[5010]: I0203 11:06:44.700125 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-85855ff49d-76x8k_f377630f-64f3-4fd9-8449-53d739d775c2/barbican-keystone-listener-log/0.log" Feb 03 11:06:44 crc kubenswrapper[5010]: I0203 11:06:44.756640 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bdd746887-zr9j6_4cb276c1-b6b3-45ef-84be-8bae1d46d9d7/barbican-worker/0.log" Feb 03 11:06:44 crc kubenswrapper[5010]: I0203 11:06:44.810327 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bdd746887-zr9j6_4cb276c1-b6b3-45ef-84be-8bae1d46d9d7/barbican-worker-log/0.log" Feb 03 11:06:44 crc kubenswrapper[5010]: I0203 11:06:44.949541 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf_2d389772-7902-4aca-8bc3-03a0708fbaa2/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.045325 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/ceilometer-central-agent/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.172987 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/proxy-httpd/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.197678 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/ceilometer-notification-agent/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.209469 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/sg-core/0.log" Feb 
03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.409531 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7e079d37-86a2-4be8-a16b-821095c780f0/cinder-api-log/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.491728 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7e079d37-86a2-4be8-a16b-821095c780f0/cinder-api/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.538586 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_63ed8c2d-6ac3-4a61-8e4c-1601efeca708/cinder-scheduler/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.685143 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_63ed8c2d-6ac3-4a61-8e4c-1601efeca708/probe/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.746107 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5tffc_efb76028-3500-476c-adef-dfc87d2cdab7/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.953368 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-ktk67_f4e7c571-ff51-496f-81b8-2fee3f357d3f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.028575 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-845df_3d935acc-a244-4c1f-a9f8-9924fa8b61f1/init/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.179015 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-845df_3d935acc-a244-4c1f-a9f8-9924fa8b61f1/init/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.217263 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-845df_3d935acc-a244-4c1f-a9f8-9924fa8b61f1/dnsmasq-dns/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.251596 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs_96722ef6-9c22-4700-8163-b25503d014bd/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.436774 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1769cccf-496c-4370-8e08-e1f156fecd77/glance-httpd/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.469162 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1769cccf-496c-4370-8e08-e1f156fecd77/glance-log/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.644841 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a/glance-httpd/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.696115 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a/glance-log/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.865838 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988db4-2mpfb_2fedcc57-b16c-4177-a10e-f627269b4adb/horizon/1.log" Feb 03 11:06:47 crc kubenswrapper[5010]: I0203 11:06:47.142279 5010 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_horizon-6cc988db4-2mpfb_2fedcc57-b16c-4177-a10e-f627269b4adb/horizon/0.log" Feb 03 11:06:47 crc kubenswrapper[5010]: I0203 11:06:47.317960 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-msc5t_af6128d5-2369-4ef9-99aa-61ad0bf3b213/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:47 crc kubenswrapper[5010]: I0203 11:06:47.471816 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988db4-2mpfb_2fedcc57-b16c-4177-a10e-f627269b4adb/horizon-log/0.log" Feb 03 11:06:47 crc kubenswrapper[5010]: I0203 11:06:47.486884 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-hz8vx_49056616-86cd-41cd-a102-1072dc2a79f4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:47 crc kubenswrapper[5010]: I0203 11:06:47.746554 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-675cc696d4-7wvtv_8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4/keystone-api/0.log" Feb 03 11:06:47 crc kubenswrapper[5010]: I0203 11:06:47.890757 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29501941-gv4sr_96c330a2-14f4-4923-8707-6b9cce98267f/keystone-cron/0.log" Feb 03 11:06:48 crc kubenswrapper[5010]: I0203 11:06:48.124804 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_de374df0-0b73-4be2-9719-d4b471782ed4/kube-state-metrics/0.log" Feb 03 11:06:48 crc kubenswrapper[5010]: I0203 11:06:48.169188 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d_5b7ff70c-1251-4fd5-a71c-bf6703bcc85d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:48 crc kubenswrapper[5010]: I0203 11:06:48.603767 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78c78c7889-r9575_158ac65e-849e-4f85-a4b6-1ac4bde1a1ec/neutron-api/0.log" Feb 03 11:06:48 crc kubenswrapper[5010]: I0203 11:06:48.717040 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78c78c7889-r9575_158ac65e-849e-4f85-a4b6-1ac4bde1a1ec/neutron-httpd/0.log" Feb 03 11:06:48 crc kubenswrapper[5010]: I0203 11:06:48.814672 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p_4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:49 crc kubenswrapper[5010]: I0203 11:06:49.317547 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_26dec936-0343-4d5f-8f2b-cf2a797786b5/nova-cell0-conductor-conductor/0.log" Feb 03 11:06:49 crc kubenswrapper[5010]: I0203 11:06:49.343595 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aba2689d-cd13-4601-ac45-69409c411839/nova-api-log/0.log" Feb 03 11:06:49 crc kubenswrapper[5010]: I0203 11:06:49.648131 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aba2689d-cd13-4601-ac45-69409c411839/nova-api-api/0.log" Feb 03 11:06:49 crc kubenswrapper[5010]: I0203 11:06:49.820955 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c9bd4788-ae5f-49c4-8116-04076a16f4f1/nova-cell1-novncproxy-novncproxy/0.log" Feb 03 11:06:49 crc kubenswrapper[5010]: I0203 11:06:49.903002 5010 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_291a9878-85fe-4988-8a7d-1da10ac49b23/nova-cell1-conductor-conductor/0.log" Feb 03 11:06:49 crc kubenswrapper[5010]: I0203 11:06:49.942706 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-bq7n5_6fd37dcf-e81a-491a-a5e1-01a27517d1b4/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:50 crc kubenswrapper[5010]: I0203 11:06:50.256996 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_edaaf3a7-a254-4a29-875a-643e46308f33/nova-metadata-log/0.log" Feb 03 11:06:50 crc kubenswrapper[5010]: I0203 11:06:50.530478 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_87eb5dd8-7171-457a-8a95-eda98893319a/mysql-bootstrap/0.log" Feb 03 11:06:50 crc kubenswrapper[5010]: I0203 11:06:50.574809 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_28559aae-4731-4653-a466-8c6f5c6c7dcf/nova-scheduler-scheduler/0.log" Feb 03 11:06:50 crc kubenswrapper[5010]: I0203 11:06:50.764235 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_87eb5dd8-7171-457a-8a95-eda98893319a/mysql-bootstrap/0.log" Feb 03 11:06:50 crc kubenswrapper[5010]: I0203 11:06:50.793454 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_87eb5dd8-7171-457a-8a95-eda98893319a/galera/0.log" Feb 03 11:06:50 crc kubenswrapper[5010]: I0203 11:06:50.996709 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_449f0b91-9186-4a16-b1b4-7f199b57a428/mysql-bootstrap/0.log" Feb 03 11:06:51 crc kubenswrapper[5010]: I0203 11:06:51.212461 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_449f0b91-9186-4a16-b1b4-7f199b57a428/mysql-bootstrap/0.log" Feb 03 11:06:51 crc kubenswrapper[5010]: I0203 11:06:51.217284 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_449f0b91-9186-4a16-b1b4-7f199b57a428/galera/0.log" Feb 03 11:06:51 crc kubenswrapper[5010]: I0203 11:06:51.430883 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c80632c0-72bc-461d-8e87-591d0ddbc1a8/openstackclient/0.log" Feb 03 11:06:51 crc kubenswrapper[5010]: I0203 11:06:51.551985 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vqkq5_5235b9fc-3723-4d8a-9851-e8ee89c0b084/openstack-network-exporter/0.log" Feb 03 11:06:51 crc kubenswrapper[5010]: I0203 11:06:51.562472 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_edaaf3a7-a254-4a29-875a-643e46308f33/nova-metadata-metadata/0.log" Feb 03 11:06:51 crc kubenswrapper[5010]: I0203 11:06:51.727686 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovsdb-server-init/0.log" Feb 03 11:06:51 crc kubenswrapper[5010]: I0203 11:06:51.923896 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovsdb-server-init/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.016751 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovs-vswitchd/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.030498 5010 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovsdb-server/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.194010 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ql6ht_1883c30e-4c38-468d-a5dc-91b07f167d67/ovn-controller/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.361138 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-js9ms_a3aac34b-fb9e-4853-9a1d-c311dc75f055/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.453743 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5158e153-9918-4fce-8f2f-75a87b96562b/openstack-network-exporter/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.502976 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:06:52 crc kubenswrapper[5010]: E0203 11:06:52.503548 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.563487 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5158e153-9918-4fce-8f2f-75a87b96562b/ovn-northd/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.691895 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6d6abf1f-9905-4f96-8d44-d7ef3f9f299d/openstack-network-exporter/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.720679 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6d6abf1f-9905-4f96-8d44-d7ef3f9f299d/ovsdbserver-nb/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.889998 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6dfa0a64-db8a-457a-8eff-f27ffa8e02ce/ovsdbserver-sb/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.936681 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6dfa0a64-db8a-457a-8eff-f27ffa8e02ce/openstack-network-exporter/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.234171 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bc6c5cf68-f9b4p_3ecd94c1-1faa-4acd-aa24-dd54388d2d99/placement-log/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.282812 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bc6c5cf68-f9b4p_3ecd94c1-1faa-4acd-aa24-dd54388d2d99/placement-api/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.287048 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf/setup-container/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.594640 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf/rabbitmq/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.630154 5010 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf/setup-container/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.652331 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_543f315d-d2f8-497f-a2c1-1a929c1611be/setup-container/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.802586 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_543f315d-d2f8-497f-a2c1-1a929c1611be/setup-container/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.823451 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_543f315d-d2f8-497f-a2c1-1a929c1611be/rabbitmq/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.903112 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt_d4357ef1-04ea-4dbd-acd8-70f34a5a72a1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:54 crc kubenswrapper[5010]: I0203 11:06:54.099882 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-r8zqk_36d3f978-a301-44e6-a401-72e94c9f70ad/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:54 crc kubenswrapper[5010]: I0203 11:06:54.247706 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mg749_43ecdc43-d866-4902-89cb-0ce68e89fe05/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:54 crc kubenswrapper[5010]: I0203 11:06:54.372966 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-nm955_a9fa7d27-81da-4dcd-adef-cb22c35d2641/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:54 crc kubenswrapper[5010]: I0203 11:06:54.551961 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pfhx5_67a7675c-9074-4390-85ab-2bba845b2dc0/ssh-known-hosts-edpm-deployment/0.log" Feb 03 11:06:54 crc kubenswrapper[5010]: I0203 11:06:54.734544 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7594db59b7-8cg94_a0d01af0-abb7-4cd1-92d7-d741182948f9/proxy-httpd/0.log" Feb 03 11:06:54 crc kubenswrapper[5010]: I0203 11:06:54.761074 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7594db59b7-8cg94_a0d01af0-abb7-4cd1-92d7-d741182948f9/proxy-server/0.log" Feb 03 11:06:54 crc kubenswrapper[5010]: I0203 11:06:54.851704 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-n8qtn_65c9ffaf-83e3-47c1-a1e8-b097b371ccec/swift-ring-rebalance/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.006182 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-auditor/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.053502 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-replicator/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.157244 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-reaper/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.268405 5010 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-server/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.269912 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-auditor/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.343578 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-replicator/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.383435 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-server/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.512977 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-updater/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.573798 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-expirer/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.629413 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-auditor/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.631791 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-replicator/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.753746 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-server/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.808808 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-updater/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.862501 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/rsync/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.911486 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/swift-recon-cron/0.log" Feb 03 11:06:56 crc kubenswrapper[5010]: I0203 11:06:56.122871 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h_7353ead1-b7ae-446c-a262-5a383b1d7e52/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:56 crc kubenswrapper[5010]: I0203 11:06:56.213681 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_8c8d92ab-5652-4bd9-81af-fd0be7aea36f/tempest-tests-tempest-tests-runner/0.log" Feb 03 11:06:56 crc kubenswrapper[5010]: I0203 11:06:56.390205 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_8dfa1254-0d2c-4885-a531-fc90541692e7/test-operator-logs-container/0.log" Feb 03 11:06:56 crc kubenswrapper[5010]: I0203 11:06:56.454317 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7_3109739d-69b7-439a-b6c4-a8affbe0af4f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:07:04 
crc kubenswrapper[5010]: I0203 11:07:04.857317 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_95adc2d1-1093-484e-8580-53e244b420c8/memcached/0.log" Feb 03 11:07:05 crc kubenswrapper[5010]: I0203 11:07:05.502919 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:07:05 crc kubenswrapper[5010]: E0203 11:07:05.503288 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.758400 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6h57h"] Feb 03 11:07:17 crc kubenswrapper[5010]: E0203 11:07:17.759269 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862810dd-615e-414c-96cd-45c3e36631c5" containerName="container-00" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.759283 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="862810dd-615e-414c-96cd-45c3e36631c5" containerName="container-00" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.759518 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="862810dd-615e-414c-96cd-45c3e36631c5" containerName="container-00" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.761767 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.821400 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p79nm\" (UniqueName: \"kubernetes.io/projected/e085b7a5-0035-41be-963b-d88937d4ddd3-kube-api-access-p79nm\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.821490 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-catalog-content\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.821525 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-utilities\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.859717 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6h57h"] Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.923652 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p79nm\" (UniqueName: \"kubernetes.io/projected/e085b7a5-0035-41be-963b-d88937d4ddd3-kube-api-access-p79nm\") pod \"certified-operators-6h57h\" (UID: 
\"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.923731 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-catalog-content\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.923769 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-utilities\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.924490 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-catalog-content\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.924544 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-utilities\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.953411 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p79nm\" (UniqueName: \"kubernetes.io/projected/e085b7a5-0035-41be-963b-d88937d4ddd3-kube-api-access-p79nm\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:18 crc kubenswrapper[5010]: I0203 11:07:18.089804 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:18 crc kubenswrapper[5010]: I0203 11:07:18.630427 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6h57h"] Feb 03 11:07:18 crc kubenswrapper[5010]: I0203 11:07:18.991775 5010 generic.go:334] "Generic (PLEG): container finished" podID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerID="91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f" exitCode=0 Feb 03 11:07:18 crc kubenswrapper[5010]: I0203 11:07:18.991907 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6h57h" event={"ID":"e085b7a5-0035-41be-963b-d88937d4ddd3","Type":"ContainerDied","Data":"91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f"} Feb 03 11:07:18 crc kubenswrapper[5010]: I0203 11:07:18.993557 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6h57h" event={"ID":"e085b7a5-0035-41be-963b-d88937d4ddd3","Type":"ContainerStarted","Data":"6fbc71d6cf4d21787d118de08f943a41757fb79167e2a84dc014c9c9697ac8eb"} Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.127679 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rkxrd"] Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.130195 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.158323 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkxrd"] Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.258500 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhs6p\" (UniqueName: \"kubernetes.io/projected/9e992b66-8ed7-4652-811b-360f53059f2c-kube-api-access-mhs6p\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.258588 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-catalog-content\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.258815 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-utilities\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.362019 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhs6p\" (UniqueName: \"kubernetes.io/projected/9e992b66-8ed7-4652-811b-360f53059f2c-kube-api-access-mhs6p\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.362163 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-catalog-content\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.362264 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-utilities\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.363176 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-utilities\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.363870 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-catalog-content\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.392130 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhs6p\" (UniqueName: \"kubernetes.io/projected/9e992b66-8ed7-4652-811b-360f53059f2c-kube-api-access-mhs6p\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.447944 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.503555 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:07:19 crc kubenswrapper[5010]: E0203 11:07:19.503762 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:07:20 crc kubenswrapper[5010]: I0203 11:07:20.037433 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6h57h" event={"ID":"e085b7a5-0035-41be-963b-d88937d4ddd3","Type":"ContainerStarted","Data":"6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4"} Feb 03 11:07:20 crc kubenswrapper[5010]: I0203 11:07:20.200440 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkxrd"] Feb 03 11:07:21 crc kubenswrapper[5010]: I0203 11:07:21.049916 5010 generic.go:334] "Generic (PLEG): container finished" podID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerID="6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4" exitCode=0 Feb 03 11:07:21 crc kubenswrapper[5010]: I0203 11:07:21.050006 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6h57h" event={"ID":"e085b7a5-0035-41be-963b-d88937d4ddd3","Type":"ContainerDied","Data":"6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4"} Feb 03 11:07:21 crc kubenswrapper[5010]: I0203 11:07:21.054503 5010 generic.go:334] "Generic (PLEG): container finished" podID="9e992b66-8ed7-4652-811b-360f53059f2c" containerID="dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a" exitCode=0 Feb 03 11:07:21 crc kubenswrapper[5010]: I0203 11:07:21.054579 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkxrd" event={"ID":"9e992b66-8ed7-4652-811b-360f53059f2c","Type":"ContainerDied","Data":"dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a"} Feb 03 11:07:21 crc kubenswrapper[5010]: I0203 11:07:21.054721 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkxrd" event={"ID":"9e992b66-8ed7-4652-811b-360f53059f2c","Type":"ContainerStarted","Data":"221b78dd250fa3bbf0778a979aae37d7c6453448fa3f462783d5b97fb2924c8e"} Feb 03 11:07:22 crc kubenswrapper[5010]: I0203 11:07:22.064501 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkxrd" event={"ID":"9e992b66-8ed7-4652-811b-360f53059f2c","Type":"ContainerStarted","Data":"9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a"} Feb 03 11:07:22 crc kubenswrapper[5010]: I0203 11:07:22.067199 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6h57h" event={"ID":"e085b7a5-0035-41be-963b-d88937d4ddd3","Type":"ContainerStarted","Data":"598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c"} Feb 03 11:07:22 crc kubenswrapper[5010]: I0203 11:07:22.112952 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-6h57h" podStartSLOduration=2.648317458 podStartE2EDuration="5.112929272s" podCreationTimestamp="2026-02-03 11:07:17 +0000 UTC" firstStartedPulling="2026-02-03 11:07:18.994147915 +0000 UTC m=+3909.150124044" lastFinishedPulling="2026-02-03 11:07:21.458759729 +0000 UTC m=+3911.614735858" observedRunningTime="2026-02-03 11:07:22.110079352 +0000 UTC m=+3912.266055481" watchObservedRunningTime="2026-02-03 11:07:22.112929272 +0000 UTC m=+3912.268905411" Feb 03 11:07:23 crc kubenswrapper[5010]: I0203 11:07:23.078323 5010 generic.go:334] "Generic (PLEG): container finished" podID="9e992b66-8ed7-4652-811b-360f53059f2c" containerID="9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a" exitCode=0 Feb 03 11:07:23 crc kubenswrapper[5010]: I0203 11:07:23.078803 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkxrd" event={"ID":"9e992b66-8ed7-4652-811b-360f53059f2c","Type":"ContainerDied","Data":"9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a"} Feb 03 11:07:25 crc kubenswrapper[5010]: I0203 11:07:25.105176 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkxrd" event={"ID":"9e992b66-8ed7-4652-811b-360f53059f2c","Type":"ContainerStarted","Data":"e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e"} Feb 03 11:07:25 crc kubenswrapper[5010]: I0203 11:07:25.143056 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rkxrd" podStartSLOduration=3.246671906 podStartE2EDuration="6.143030417s" podCreationTimestamp="2026-02-03 11:07:19 +0000 UTC" firstStartedPulling="2026-02-03 11:07:21.055965167 +0000 UTC m=+3911.211941296" lastFinishedPulling="2026-02-03 11:07:23.952323678 +0000 UTC m=+3914.108299807" observedRunningTime="2026-02-03 11:07:25.13220344 +0000 UTC m=+3915.288179599" watchObservedRunningTime="2026-02-03 11:07:25.143030417 +0000 UTC m=+3915.299006556" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.026570 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/util/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.258204 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/util/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.271636 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/pull/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.392930 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/pull/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.574898 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/extract/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.591187 5010 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/util/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.592735 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/pull/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.832397 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-jvb56_74803e29-48a3-4667-bcdb-a94f381545b5/manager/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.841683 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-52g72_a7d72ea1-7126-4768-9cf8-f590ebd216d7/manager/0.log" Feb 03 11:07:27 crc kubenswrapper[5010]: I0203 11:07:27.204674 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-j87lc_fd413d86-2cda-4079-a895-5cb60928a47f/manager/0.log" Feb 03 11:07:27 crc kubenswrapper[5010]: I0203 11:07:27.322096 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-gnxws_9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe/manager/0.log" Feb 03 11:07:27 crc kubenswrapper[5010]: I0203 11:07:27.439074 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-7szqs_d33dc0fd-847b-41cc-a8ac-afde40120ba2/manager/0.log" Feb 03 11:07:27 crc kubenswrapper[5010]: I0203 11:07:27.604607 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-k765q_9dc494bd-d6ef-4a22-8312-67750ebb3dbe/manager/0.log" Feb 03 11:07:27 crc kubenswrapper[5010]: I0203 11:07:27.845370 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-w7ldz_2f204595-5d98-4c16-b5d1-5004c6cae836/manager/0.log" Feb 03 11:07:27 crc kubenswrapper[5010]: I0203 11:07:27.926848 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-vlmtm_5fafda3f-e0cd-4477-9c10-442af83a835b/manager/0.log" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.090081 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.090362 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.158015 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.182749 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-gb8tp_1a136ea1-ab68-4f60-8fb2-969363f25337/manager/0.log" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.185724 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-qrkwl_7f20ca5f-d244-45be-864d-3b8ad3d456ea/manager/0.log" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 
11:07:28.357327 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-5zbbw_42f76062-3a9d-45c1-b928-d9ca236ec8ab/manager/0.log" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.514732 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-pwdks_4f112d60-8db7-4ec2-a82d-c7627ade05a3/manager/0.log" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.653199 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-t47jc_21f46dec-fb01-4293-ad08-706eb63a8738/manager/0.log" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.774562 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-5lzr6_27ab6ab7-e411-466c-bc4a-97d1660c547e/manager/0.log" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.859653 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs_76bde002-75f6-4c4a-af3d-16aec5a221f4/manager/0.log" Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.206799 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.213108 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-578f994c6c-72ld2_bde44bc9-c06a-4c2b-aad8-6f3247272024/operator/0.log" Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.449436 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.450158 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.502428 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fv5km_1e93c0a0-5a7b-40d7-aaee-e31455baf139/registry-server/0.log" Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.713274 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6h57h"] Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.842235 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-g8qz8_3e47047f-9303-47e2-8312-c83315e1a3ff/manager/0.log" Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.875206 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-d99mj_8251c193-3c53-4651-87da-8b216cf907aa/manager/0.log" Feb 03 11:07:30 crc kubenswrapper[5010]: I0203 11:07:30.157132 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-kj7mj_2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a/operator/0.log" Feb 03 11:07:30 crc kubenswrapper[5010]: I0203 11:07:30.264813 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-844f879456-5ktjc_54aaeb1d-8a23-413f-b1f4-5115b167d78b/manager/0.log" Feb 03 11:07:30 crc kubenswrapper[5010]: I0203 11:07:30.369653 5010 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-mrvfq_84af1f21-c29e-4846-9ce1-ea345cbad4fc/manager/0.log" Feb 03 11:07:30 crc kubenswrapper[5010]: I0203 11:07:30.471600 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-pgwx2_a62d6669-692b-4909-b192-4348ac82a50d/manager/0.log" Feb 03 11:07:30 crc kubenswrapper[5010]: I0203 11:07:30.516702 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rkxrd" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="registry-server" probeResult="failure" output=< Feb 03 11:07:30 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s Feb 03 11:07:30 crc kubenswrapper[5010]: > Feb 03 11:07:30 crc kubenswrapper[5010]: I0203 11:07:30.545503 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-ck5g7_e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58/manager/0.log" Feb 03 11:07:30 crc kubenswrapper[5010]: I0203 11:07:30.713254 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-ftqqr_37a4f3fa-bbaf-433d-9835-6ac576351651/manager/0.log" Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.162855 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6h57h" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="registry-server" containerID="cri-o://598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c" gracePeriod=2 Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.649226 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.741102 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-utilities\") pod \"e085b7a5-0035-41be-963b-d88937d4ddd3\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.741287 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-catalog-content\") pod \"e085b7a5-0035-41be-963b-d88937d4ddd3\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.741326 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p79nm\" (UniqueName: \"kubernetes.io/projected/e085b7a5-0035-41be-963b-d88937d4ddd3-kube-api-access-p79nm\") pod \"e085b7a5-0035-41be-963b-d88937d4ddd3\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.741868 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-utilities" (OuterVolumeSpecName: "utilities") pod "e085b7a5-0035-41be-963b-d88937d4ddd3" (UID: "e085b7a5-0035-41be-963b-d88937d4ddd3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.748117 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e085b7a5-0035-41be-963b-d88937d4ddd3-kube-api-access-p79nm" (OuterVolumeSpecName: "kube-api-access-p79nm") pod "e085b7a5-0035-41be-963b-d88937d4ddd3" (UID: "e085b7a5-0035-41be-963b-d88937d4ddd3"). InnerVolumeSpecName "kube-api-access-p79nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.801822 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e085b7a5-0035-41be-963b-d88937d4ddd3" (UID: "e085b7a5-0035-41be-963b-d88937d4ddd3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.843559 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.843603 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p79nm\" (UniqueName: \"kubernetes.io/projected/e085b7a5-0035-41be-963b-d88937d4ddd3-kube-api-access-p79nm\") on node \"crc\" DevicePath \"\"" Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.843621 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.174130 5010 generic.go:334] "Generic (PLEG): container finished" podID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerID="598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c" exitCode=0 Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.174183 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6h57h" event={"ID":"e085b7a5-0035-41be-963b-d88937d4ddd3","Type":"ContainerDied","Data":"598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c"} Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.174260 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6h57h" event={"ID":"e085b7a5-0035-41be-963b-d88937d4ddd3","Type":"ContainerDied","Data":"6fbc71d6cf4d21787d118de08f943a41757fb79167e2a84dc014c9c9697ac8eb"} Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.174284 5010 scope.go:117] "RemoveContainer" containerID="598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c" Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.174300 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.213625 5010 scope.go:117] "RemoveContainer" containerID="6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4" Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.231472 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6h57h"] Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.242735 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6h57h"] Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.266903 5010 scope.go:117] "RemoveContainer" containerID="91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f" Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.311351 5010 scope.go:117] "RemoveContainer" containerID="598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c" Feb 03 11:07:32 crc kubenswrapper[5010]: E0203 11:07:32.311878 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c\": container with ID starting with 598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c not found: ID does not exist" containerID="598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c" Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.311913 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c"} err="failed to get container status \"598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c\": rpc error: code = NotFound desc = could not find container \"598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c\": container with ID starting with 598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c not found: ID does not exist" Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.311936 5010 scope.go:117] "RemoveContainer" containerID="6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4" Feb 03 11:07:32 crc kubenswrapper[5010]: E0203 11:07:32.312137 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4\": container with ID starting with 6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4 not found: ID does not exist" containerID="6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4" Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.312162 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4"} err="failed to get container status \"6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4\": rpc error: code = NotFound desc = could not find container \"6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4\": container with ID starting with 6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4 not found: ID does not exist" Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.312179 5010 scope.go:117] "RemoveContainer" containerID="91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f" Feb 03 11:07:32 crc kubenswrapper[5010]: E0203 11:07:32.312453 5010 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f\": container with ID starting with 91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f not found: ID does not exist" containerID="91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f" Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.312496 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f"} err="failed to get container status \"91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f\": rpc error: code = NotFound desc = could not find container \"91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f\": container with ID starting with 91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f not found: ID does not exist" Feb 03 11:07:32 crc kubenswrapper[5010]: E0203 11:07:32.380236 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode085b7a5_0035_41be_963b_d88937d4ddd3.slice\": RecentStats: unable to find data in memory cache]" Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.518428 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" path="/var/lib/kubelet/pods/e085b7a5-0035-41be-963b-d88937d4ddd3/volumes" Feb 03 11:07:34 crc kubenswrapper[5010]: I0203 11:07:34.502923 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:07:34 crc kubenswrapper[5010]: E0203 11:07:34.503454 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:07:39 crc kubenswrapper[5010]: I0203 11:07:39.512810 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:39 crc kubenswrapper[5010]: I0203 11:07:39.579401 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:39 crc kubenswrapper[5010]: I0203 11:07:39.755687 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkxrd"] Feb 03 11:07:41 crc kubenswrapper[5010]: I0203 11:07:41.269205 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rkxrd" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="registry-server" containerID="cri-o://e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e" gracePeriod=2 Feb 03 11:07:41 crc kubenswrapper[5010]: I0203 11:07:41.849721 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.035478 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhs6p\" (UniqueName: \"kubernetes.io/projected/9e992b66-8ed7-4652-811b-360f53059f2c-kube-api-access-mhs6p\") pod \"9e992b66-8ed7-4652-811b-360f53059f2c\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.035741 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-utilities\") pod \"9e992b66-8ed7-4652-811b-360f53059f2c\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.035820 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-catalog-content\") pod \"9e992b66-8ed7-4652-811b-360f53059f2c\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.043904 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e992b66-8ed7-4652-811b-360f53059f2c-kube-api-access-mhs6p" (OuterVolumeSpecName: "kube-api-access-mhs6p") pod "9e992b66-8ed7-4652-811b-360f53059f2c" (UID: "9e992b66-8ed7-4652-811b-360f53059f2c"). InnerVolumeSpecName "kube-api-access-mhs6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.044504 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-utilities" (OuterVolumeSpecName: "utilities") pod "9e992b66-8ed7-4652-811b-360f53059f2c" (UID: "9e992b66-8ed7-4652-811b-360f53059f2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.093145 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e992b66-8ed7-4652-811b-360f53059f2c" (UID: "9e992b66-8ed7-4652-811b-360f53059f2c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.139201 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.139296 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.139313 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhs6p\" (UniqueName: \"kubernetes.io/projected/9e992b66-8ed7-4652-811b-360f53059f2c-kube-api-access-mhs6p\") on node \"crc\" DevicePath \"\"" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.283506 5010 generic.go:334] "Generic (PLEG): container finished" podID="9e992b66-8ed7-4652-811b-360f53059f2c" containerID="e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e" exitCode=0 Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.283567 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkxrd" event={"ID":"9e992b66-8ed7-4652-811b-360f53059f2c","Type":"ContainerDied","Data":"e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e"} Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.283614 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkxrd" event={"ID":"9e992b66-8ed7-4652-811b-360f53059f2c","Type":"ContainerDied","Data":"221b78dd250fa3bbf0778a979aae37d7c6453448fa3f462783d5b97fb2924c8e"} Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.283635 5010 scope.go:117] "RemoveContainer" containerID="e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.283634 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.308334 5010 scope.go:117] "RemoveContainer" containerID="9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.333950 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkxrd"] Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.343747 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rkxrd"] Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.347513 5010 scope.go:117] "RemoveContainer" containerID="dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.397019 5010 scope.go:117] "RemoveContainer" containerID="e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e" Feb 03 11:07:42 crc kubenswrapper[5010]: E0203 11:07:42.397659 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e\": container with ID starting with e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e not found: ID does not exist" containerID="e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.397699 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e"} err="failed to get container status \"e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e\": rpc error: code = NotFound desc = could not find container \"e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e\": container with ID starting with e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e not found: ID does not exist" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.397732 5010 scope.go:117] "RemoveContainer" containerID="9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a" Feb 03 11:07:42 crc kubenswrapper[5010]: E0203 11:07:42.397956 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a\": container with ID starting with 9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a not found: ID does not exist" containerID="9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.397986 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a"} err="failed to get container status \"9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a\": rpc error: code = NotFound desc = could not find container \"9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a\": container with ID starting with 9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a not found: ID does not exist" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.398005 5010 scope.go:117] "RemoveContainer" containerID="dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a" Feb 03 11:07:42 crc kubenswrapper[5010]: E0203 11:07:42.398266 5010 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a\": container with ID starting with dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a not found: ID does not exist" containerID="dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.398296 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a"} err="failed to get container status \"dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a\": rpc error: code = NotFound desc = could not find container \"dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a\": container with ID starting with dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a not found: ID does not exist" Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.521763 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" path="/var/lib/kubelet/pods/9e992b66-8ed7-4652-811b-360f53059f2c/volumes" Feb 03 11:07:45 crc kubenswrapper[5010]: I0203 11:07:45.502637 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:07:45 crc kubenswrapper[5010]: E0203 11:07:45.503989 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:07:55 crc kubenswrapper[5010]: I0203 11:07:55.455640 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xcpwg_ba766e4c-056f-4be6-a4b9-05592b641f87/control-plane-machine-set-operator/0.log" Feb 03 11:07:55 crc kubenswrapper[5010]: I0203 11:07:55.681910 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5mq4r_dc73dc6e-53ff-48b8-932e-d5aeb839f2dd/kube-rbac-proxy/0.log" Feb 03 11:07:55 crc kubenswrapper[5010]: I0203 11:07:55.756234 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5mq4r_dc73dc6e-53ff-48b8-932e-d5aeb839f2dd/machine-api-operator/0.log" Feb 03 11:07:57 crc kubenswrapper[5010]: I0203 11:07:57.502897 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:07:58 crc kubenswrapper[5010]: I0203 11:07:58.467578 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"ac78d23a14c3e413f9adbd91456af15e59e69a5cb21ee1b464426dbfabf685ce"} Feb 03 11:08:13 crc kubenswrapper[5010]: I0203 11:08:13.240509 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-wtwpn_7746ae6f-d9a0-4bba-a7bc-4920ed478ff4/cert-manager-controller/0.log" Feb 03 11:08:13 crc kubenswrapper[5010]: I0203 11:08:13.826171 5010 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-b5ngd_b9d02d93-3df5-4e4a-99b3-07329087dc2c/cert-manager-cainjector/0.log" Feb 03 11:08:13 crc kubenswrapper[5010]: I0203 11:08:13.835813 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-bfc2c_26bf0193-c1b8-4018-a7e4-4429a4292dfb/cert-manager-webhook/0.log" Feb 03 11:08:32 crc kubenswrapper[5010]: I0203 11:08:32.863915 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-npjjg_a09e0456-1529-4ece-9266-d02a283d6bd1/nmstate-console-plugin/0.log" Feb 03 11:08:33 crc kubenswrapper[5010]: I0203 11:08:33.388890 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-55jg2_d47b696a-a1d0-4389-a099-7f375ab72f8c/nmstate-handler/0.log" Feb 03 11:08:33 crc kubenswrapper[5010]: I0203 11:08:33.414519 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hl7ls_552fa369-352c-4690-aa39-f0364021feae/nmstate-metrics/0.log" Feb 03 11:08:33 crc kubenswrapper[5010]: I0203 11:08:33.419364 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hl7ls_552fa369-352c-4690-aa39-f0364021feae/kube-rbac-proxy/0.log" Feb 03 11:08:33 crc kubenswrapper[5010]: I0203 11:08:33.645150 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-2xtg6_1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a/nmstate-webhook/0.log" Feb 03 11:08:33 crc kubenswrapper[5010]: I0203 11:08:33.684647 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-frs8s_e5c85e5b-ab19-414d-97e6-767b9e01f731/nmstate-operator/0.log" Feb 03 11:09:06 crc kubenswrapper[5010]: I0203 11:09:06.750225 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lpqgh_19f856e9-2325-41eb-8ed3-4daff562e84a/kube-rbac-proxy/0.log" Feb 03 11:09:06 crc kubenswrapper[5010]: I0203 11:09:06.934009 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lpqgh_19f856e9-2325-41eb-8ed3-4daff562e84a/controller/0.log" Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.069853 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log" Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.273557 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log" Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.283316 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log" Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.306240 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log" Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.321283 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log" Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.470350 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log" Feb 03 
11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.518985 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log" Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.538347 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log" Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.559482 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log" Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.783385 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log" Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.785668 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log" Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.798708 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log" Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.814880 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/controller/0.log" Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.007083 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/kube-rbac-proxy/0.log" Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.012481 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/kube-rbac-proxy-frr/0.log" Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.015962 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/frr-metrics/0.log" Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.244707 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-dbqxw_f6ea4a71-2a4d-48cd-9dda-ba453a1c8766/frr-k8s-webhook-server/0.log" Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.296520 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/reloader/0.log" Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.463990 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76d7f7cd57-dncnc_5ec28393-ea76-4413-a903-612126368291/manager/0.log" Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.666123 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5b857c8d44-88x9l_d90f33c9-1c81-4b74-a905-71aed9ecf222/webhook-server/0.log" Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.778634 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-mlsql_72e88a76-8c59-4d07-813e-d7d505d14c3b/kube-rbac-proxy/0.log" Feb 03 11:09:09 crc kubenswrapper[5010]: I0203 11:09:09.345915 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-mlsql_72e88a76-8c59-4d07-813e-d7d505d14c3b/speaker/0.log" Feb 03 11:09:09 crc kubenswrapper[5010]: I0203 
Feb 03 11:09:27 crc kubenswrapper[5010]: I0203 11:09:27.084491 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/util/0.log"
Feb 03 11:09:27 crc kubenswrapper[5010]: I0203 11:09:27.309499 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/util/0.log"
Feb 03 11:09:27 crc kubenswrapper[5010]: I0203 11:09:27.342363 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/pull/0.log"
Feb 03 11:09:27 crc kubenswrapper[5010]: I0203 11:09:27.358735 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/pull/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.181463 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/pull/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.211361 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/extract/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.211423 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/util/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.402060 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/util/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.564955 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/util/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.611452 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/pull/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.629099 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/pull/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.819694 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/extract/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.835310 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/pull/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.844064 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/util/0.log"
Feb 03 11:09:29 crc kubenswrapper[5010]: I0203 11:09:29.033849 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-utilities/0.log"
Feb 03 11:09:29 crc kubenswrapper[5010]: I0203 11:09:29.342457 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-utilities/0.log"
Feb 03 11:09:29 crc kubenswrapper[5010]: I0203 11:09:29.347416 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-content/0.log"
Feb 03 11:09:29 crc kubenswrapper[5010]: I0203 11:09:29.365084 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-content/0.log"
Feb 03 11:09:29 crc kubenswrapper[5010]: I0203 11:09:29.528752 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-content/0.log"
Feb 03 11:09:29 crc kubenswrapper[5010]: I0203 11:09:29.568944 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-utilities/0.log"
Feb 03 11:09:29 crc kubenswrapper[5010]: I0203 11:09:29.856577 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-utilities/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.026112 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-utilities/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.034391 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-content/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.200858 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-content/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.255041 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/registry-server/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.384921 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-utilities/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.464770 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-content/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.591865 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lskbc_a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe/marketplace-operator/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.930151 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/registry-server/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.999565 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-utilities/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.202400 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-content/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.264143 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-content/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.271043 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-utilities/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.426532 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-utilities/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.469907 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-content/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.539047 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-utilities/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.594028 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/registry-server/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.786606 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-content/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.813604 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-content/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.813715 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-utilities/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.999145 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-utilities/0.log"
Feb 03 11:09:32 crc kubenswrapper[5010]: I0203 11:09:32.088325 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-content/0.log"
Feb 03 11:09:32 crc kubenswrapper[5010]: I0203 11:09:32.586150 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/registry-server/0.log"
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/registry-server/0.log" Feb 03 11:09:54 crc kubenswrapper[5010]: E0203 11:09:54.343979 5010 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.58:40796->38.102.83.58:33647: write tcp 38.102.83.58:40796->38.102.83.58:33647: write: broken pipe Feb 03 11:10:16 crc kubenswrapper[5010]: I0203 11:10:16.390358 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:10:16 crc kubenswrapper[5010]: I0203 11:10:16.391397 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:10:46 crc kubenswrapper[5010]: I0203 11:10:46.390595 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:10:46 crc kubenswrapper[5010]: I0203 11:10:46.391672 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:11:16 crc kubenswrapper[5010]: I0203 11:11:16.392402 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:11:16 crc kubenswrapper[5010]: I0203 11:11:16.393325 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:11:16 crc kubenswrapper[5010]: I0203 11:11:16.393397 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 11:11:16 crc kubenswrapper[5010]: I0203 11:11:16.394444 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac78d23a14c3e413f9adbd91456af15e59e69a5cb21ee1b464426dbfabf685ce"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 11:11:16 crc kubenswrapper[5010]: I0203 11:11:16.394519 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" 
containerName="machine-config-daemon" containerID="cri-o://ac78d23a14c3e413f9adbd91456af15e59e69a5cb21ee1b464426dbfabf685ce" gracePeriod=600 Feb 03 11:11:17 crc kubenswrapper[5010]: I0203 11:11:17.225472 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="ac78d23a14c3e413f9adbd91456af15e59e69a5cb21ee1b464426dbfabf685ce" exitCode=0 Feb 03 11:11:17 crc kubenswrapper[5010]: I0203 11:11:17.225557 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"ac78d23a14c3e413f9adbd91456af15e59e69a5cb21ee1b464426dbfabf685ce"} Feb 03 11:11:17 crc kubenswrapper[5010]: I0203 11:11:17.225952 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"} Feb 03 11:11:17 crc kubenswrapper[5010]: I0203 11:11:17.226036 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.029534 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5dghg"] Feb 03 11:11:25 crc kubenswrapper[5010]: E0203 11:11:25.031544 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="extract-content" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.031566 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="extract-content" Feb 03 11:11:25 crc kubenswrapper[5010]: E0203 11:11:25.031585 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="registry-server" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.031595 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="registry-server" Feb 03 11:11:25 crc kubenswrapper[5010]: E0203 11:11:25.031621 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="extract-utilities" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.031630 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="extract-utilities" Feb 03 11:11:25 crc kubenswrapper[5010]: E0203 11:11:25.031649 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="extract-content" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.031657 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="extract-content" Feb 03 11:11:25 crc kubenswrapper[5010]: E0203 11:11:25.031691 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="extract-utilities" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.031699 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="extract-utilities" Feb 03 11:11:25 crc kubenswrapper[5010]: E0203 11:11:25.031717 5010 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="registry-server" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.031725 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="registry-server" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.032064 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="registry-server" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.032084 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="registry-server" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.034269 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.048091 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5dghg"] Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.191798 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-utilities\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.191941 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-catalog-content\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.192006 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsjjp\" (UniqueName: \"kubernetes.io/projected/14281e11-e2f8-462e-91e3-ad1c46fa575f-kube-api-access-tsjjp\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.294716 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-catalog-content\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.294811 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsjjp\" (UniqueName: \"kubernetes.io/projected/14281e11-e2f8-462e-91e3-ad1c46fa575f-kube-api-access-tsjjp\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.294952 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-utilities\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.295811 5010 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-catalog-content\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.295840 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-utilities\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.320866 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsjjp\" (UniqueName: \"kubernetes.io/projected/14281e11-e2f8-462e-91e3-ad1c46fa575f-kube-api-access-tsjjp\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.361547 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.899983 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5dghg"] Feb 03 11:11:26 crc kubenswrapper[5010]: I0203 11:11:26.323160 5010 generic.go:334] "Generic (PLEG): container finished" podID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerID="2edd458b2cfaa2b6e29690d9b6dedd98ec6688b7df796df1d92ea15b8aa6954c" exitCode=0 Feb 03 11:11:26 crc kubenswrapper[5010]: I0203 11:11:26.323404 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dghg" event={"ID":"14281e11-e2f8-462e-91e3-ad1c46fa575f","Type":"ContainerDied","Data":"2edd458b2cfaa2b6e29690d9b6dedd98ec6688b7df796df1d92ea15b8aa6954c"} Feb 03 11:11:26 crc kubenswrapper[5010]: I0203 11:11:26.323481 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dghg" event={"ID":"14281e11-e2f8-462e-91e3-ad1c46fa575f","Type":"ContainerStarted","Data":"c325c7a4482ba02c5a1e03254cfa81223b26a9652eb1ea3e709a042cf8c205a0"} Feb 03 11:11:26 crc kubenswrapper[5010]: I0203 11:11:26.325924 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 11:11:29 crc kubenswrapper[5010]: I0203 11:11:29.396614 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dghg" event={"ID":"14281e11-e2f8-462e-91e3-ad1c46fa575f","Type":"ContainerStarted","Data":"306bee7e759854f6a192fe0ffdf5df25e12e0a3028ac1c2be5e4c36d51b30a5f"} Feb 03 11:11:30 crc kubenswrapper[5010]: I0203 11:11:30.407581 5010 generic.go:334] "Generic (PLEG): container finished" podID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerID="306bee7e759854f6a192fe0ffdf5df25e12e0a3028ac1c2be5e4c36d51b30a5f" exitCode=0 Feb 03 11:11:30 crc kubenswrapper[5010]: I0203 11:11:30.407844 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dghg" event={"ID":"14281e11-e2f8-462e-91e3-ad1c46fa575f","Type":"ContainerDied","Data":"306bee7e759854f6a192fe0ffdf5df25e12e0a3028ac1c2be5e4c36d51b30a5f"} Feb 03 11:11:31 crc kubenswrapper[5010]: I0203 11:11:31.424206 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dghg" 
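The reconciler entries above trace the standard volume sequence for redhat-operators-5dghg: VerifyControllerAttachedVolume, then MountVolume for the pod's two emptyDir volumes plus the auto-injected projected token volume (kube-api-access-tsjjp). A sketch of the volume stanza that would produce exactly those mounts follows; the mount paths and container image are assumptions, and the token volume is not declared because the admission layer injects it.

// catalogvolumes.go - sketch of the emptyDir volume set whose mount sequence
// is traced above for redhat-operators-5dghg.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
				{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
			},
			Containers: []corev1.Container{{
				Name: "registry-server", // image/command omitted; assumed
				VolumeMounts: []corev1.VolumeMount{
					{Name: "utilities", MountPath: "/utilities"},               // mount path assumed
					{Name: "catalog-content", MountPath: "/extracted-catalog"}, // mount path assumed
				},
			}},
		},
	}
	fmt.Println(len(pod.Spec.Volumes), "volumes declared")
}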
event={"ID":"14281e11-e2f8-462e-91e3-ad1c46fa575f","Type":"ContainerStarted","Data":"3bd849a4e703cdb76aecc93972aa5f7990799fc9bee08fac17023aef5ff87483"} Feb 03 11:11:31 crc kubenswrapper[5010]: I0203 11:11:31.462825 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5dghg" podStartSLOduration=1.9570759149999999 podStartE2EDuration="6.462792619s" podCreationTimestamp="2026-02-03 11:11:25 +0000 UTC" firstStartedPulling="2026-02-03 11:11:26.325631954 +0000 UTC m=+4156.481608083" lastFinishedPulling="2026-02-03 11:11:30.831348658 +0000 UTC m=+4160.987324787" observedRunningTime="2026-02-03 11:11:31.460033101 +0000 UTC m=+4161.616009230" watchObservedRunningTime="2026-02-03 11:11:31.462792619 +0000 UTC m=+4161.618768748" Feb 03 11:11:35 crc kubenswrapper[5010]: I0203 11:11:35.361899 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:35 crc kubenswrapper[5010]: I0203 11:11:35.362603 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:36 crc kubenswrapper[5010]: I0203 11:11:36.438226 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5dghg" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="registry-server" probeResult="failure" output=< Feb 03 11:11:36 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s Feb 03 11:11:36 crc kubenswrapper[5010]: > Feb 03 11:11:45 crc kubenswrapper[5010]: I0203 11:11:45.423057 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:45 crc kubenswrapper[5010]: I0203 11:11:45.499076 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.238274 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5dghg"] Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.239019 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5dghg" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="registry-server" containerID="cri-o://3bd849a4e703cdb76aecc93972aa5f7990799fc9bee08fac17023aef5ff87483" gracePeriod=2 Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.626617 5010 generic.go:334] "Generic (PLEG): container finished" podID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerID="3bd849a4e703cdb76aecc93972aa5f7990799fc9bee08fac17023aef5ff87483" exitCode=0 Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.626707 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dghg" event={"ID":"14281e11-e2f8-462e-91e3-ad1c46fa575f","Type":"ContainerDied","Data":"3bd849a4e703cdb76aecc93972aa5f7990799fc9bee08fac17023aef5ff87483"} Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.627097 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dghg" event={"ID":"14281e11-e2f8-462e-91e3-ad1c46fa575f","Type":"ContainerDied","Data":"c325c7a4482ba02c5a1e03254cfa81223b26a9652eb1ea3e709a042cf8c205a0"} Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.627120 5010 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c325c7a4482ba02c5a1e03254cfa81223b26a9652eb1ea3e709a042cf8c205a0" Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.731421 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.907716 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsjjp\" (UniqueName: \"kubernetes.io/projected/14281e11-e2f8-462e-91e3-ad1c46fa575f-kube-api-access-tsjjp\") pod \"14281e11-e2f8-462e-91e3-ad1c46fa575f\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.907819 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-catalog-content\") pod \"14281e11-e2f8-462e-91e3-ad1c46fa575f\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.908030 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-utilities\") pod \"14281e11-e2f8-462e-91e3-ad1c46fa575f\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.909638 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-utilities" (OuterVolumeSpecName: "utilities") pod "14281e11-e2f8-462e-91e3-ad1c46fa575f" (UID: "14281e11-e2f8-462e-91e3-ad1c46fa575f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.910460 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.924571 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14281e11-e2f8-462e-91e3-ad1c46fa575f-kube-api-access-tsjjp" (OuterVolumeSpecName: "kube-api-access-tsjjp") pod "14281e11-e2f8-462e-91e3-ad1c46fa575f" (UID: "14281e11-e2f8-462e-91e3-ad1c46fa575f"). InnerVolumeSpecName "kube-api-access-tsjjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:11:48 crc kubenswrapper[5010]: I0203 11:11:48.016434 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsjjp\" (UniqueName: \"kubernetes.io/projected/14281e11-e2f8-462e-91e3-ad1c46fa575f-kube-api-access-tsjjp\") on node \"crc\" DevicePath \"\"" Feb 03 11:11:48 crc kubenswrapper[5010]: I0203 11:11:48.045507 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14281e11-e2f8-462e-91e3-ad1c46fa575f" (UID: "14281e11-e2f8-462e-91e3-ad1c46fa575f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:11:48 crc kubenswrapper[5010]: I0203 11:11:48.119892 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 11:11:48 crc kubenswrapper[5010]: I0203 11:11:48.640994 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dghg" Feb 03 11:11:48 crc kubenswrapper[5010]: I0203 11:11:48.680316 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5dghg"] Feb 03 11:11:48 crc kubenswrapper[5010]: I0203 11:11:48.693556 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5dghg"] Feb 03 11:11:50 crc kubenswrapper[5010]: I0203 11:11:50.524178 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" path="/var/lib/kubelet/pods/14281e11-e2f8-462e-91e3-ad1c46fa575f/volumes" Feb 03 11:11:51 crc kubenswrapper[5010]: I0203 11:11:51.676909 5010 generic.go:334] "Generic (PLEG): container finished" podID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerID="f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96" exitCode=0 Feb 03 11:11:51 crc kubenswrapper[5010]: I0203 11:11:51.677057 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" event={"ID":"a60388dd-8e4d-463c-a5da-b210ae7c19fd","Type":"ContainerDied","Data":"f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96"} Feb 03 11:11:51 crc kubenswrapper[5010]: I0203 11:11:51.678619 5010 scope.go:117] "RemoveContainer" containerID="f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96" Feb 03 11:11:52 crc kubenswrapper[5010]: I0203 11:11:52.671035 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hfbsh_must-gather-hdcmp_a60388dd-8e4d-463c-a5da-b210ae7c19fd/gather/0.log" Feb 03 11:12:00 crc kubenswrapper[5010]: I0203 11:12:00.943948 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hfbsh/must-gather-hdcmp"] Feb 03 11:12:00 crc kubenswrapper[5010]: I0203 11:12:00.945716 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerName="copy" containerID="cri-o://d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774" gracePeriod=2 Feb 03 11:12:00 crc kubenswrapper[5010]: I0203 11:12:00.956369 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hfbsh/must-gather-hdcmp"] Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.414244 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hfbsh_must-gather-hdcmp_a60388dd-8e4d-463c-a5da-b210ae7c19fd/copy/0.log" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.415097 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.494923 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a60388dd-8e4d-463c-a5da-b210ae7c19fd-must-gather-output\") pod \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") " Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.495177 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4jzz\" (UniqueName: \"kubernetes.io/projected/a60388dd-8e4d-463c-a5da-b210ae7c19fd-kube-api-access-t4jzz\") pod \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") " Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.503158 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a60388dd-8e4d-463c-a5da-b210ae7c19fd-kube-api-access-t4jzz" (OuterVolumeSpecName: "kube-api-access-t4jzz") pod "a60388dd-8e4d-463c-a5da-b210ae7c19fd" (UID: "a60388dd-8e4d-463c-a5da-b210ae7c19fd"). InnerVolumeSpecName "kube-api-access-t4jzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.598654 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4jzz\" (UniqueName: \"kubernetes.io/projected/a60388dd-8e4d-463c-a5da-b210ae7c19fd-kube-api-access-t4jzz\") on node \"crc\" DevicePath \"\"" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.664273 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a60388dd-8e4d-463c-a5da-b210ae7c19fd-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a60388dd-8e4d-463c-a5da-b210ae7c19fd" (UID: "a60388dd-8e4d-463c-a5da-b210ae7c19fd"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.700577 5010 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a60388dd-8e4d-463c-a5da-b210ae7c19fd-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.810993 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hfbsh_must-gather-hdcmp_a60388dd-8e4d-463c-a5da-b210ae7c19fd/copy/0.log" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.811561 5010 generic.go:334] "Generic (PLEG): container finished" podID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerID="d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774" exitCode=143 Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.811653 5010 scope.go:117] "RemoveContainer" containerID="d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.811896 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.862763 5010 scope.go:117] "RemoveContainer" containerID="f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.921660 5010 scope.go:117] "RemoveContainer" containerID="d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774" Feb 03 11:12:01 crc kubenswrapper[5010]: E0203 11:12:01.931540 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774\": container with ID starting with d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774 not found: ID does not exist" containerID="d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.931608 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774"} err="failed to get container status \"d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774\": rpc error: code = NotFound desc = could not find container \"d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774\": container with ID starting with d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774 not found: ID does not exist" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.931645 5010 scope.go:117] "RemoveContainer" containerID="f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96" Feb 03 11:12:01 crc kubenswrapper[5010]: E0203 11:12:01.932260 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96\": container with ID starting with f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96 not found: ID does not exist" containerID="f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.932315 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96"} err="failed to get container status \"f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96\": rpc error: code = NotFound desc = could not find container \"f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96\": container with ID starting with f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96 not found: ID does not exist" Feb 03 11:12:02 crc kubenswrapper[5010]: I0203 11:12:02.514826 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" path="/var/lib/kubelet/pods/a60388dd-8e4d-463c-a5da-b210ae7c19fd/volumes" Feb 03 11:13:16 crc kubenswrapper[5010]: I0203 11:13:16.390615 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:13:16 crc kubenswrapper[5010]: I0203 11:13:16.391198 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
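The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries above are benign: the kubelet retries RemoveContainer and the CRI runtime answers with gRPC NotFound because the container is already gone. Callers conventionally treat NotFound on delete as success; a minimal sketch of that pattern against any gRPC-status error follows (the removeIdempotent helper is hypothetical, not a kubelet function).

// notfound.go - sketch: treat a gRPC NotFound from a delete as success, the
// way the retried RemoveContainer calls above are ultimately harmless.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIdempotent wraps any delete-style call that returns gRPC status errors.
func removeIdempotent(remove func(id string) error, id string) error {
	err := remove(id)
	if status.Code(err) == codes.NotFound {
		return nil // already deleted; nothing left to do
	}
	return err
}

func main() {
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(removeIdempotent(gone, "d0ca9d650c03")) // prints <nil>
}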
podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:13:46 crc kubenswrapper[5010]: I0203 11:13:46.390240 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:13:46 crc kubenswrapper[5010]: I0203 11:13:46.391144 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.390865 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.391714 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.391791 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.394875 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.394996 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" gracePeriod=600 Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.557202 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" exitCode=0 Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.557277 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"} Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.557316 5010 scope.go:117] "RemoveContainer" containerID="ac78d23a14c3e413f9adbd91456af15e59e69a5cb21ee1b464426dbfabf685ce" Feb 03 11:14:16 crc kubenswrapper[5010]: E0203 
11:14:16.683247 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:14:17 crc kubenswrapper[5010]: I0203 11:14:17.601521 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:14:17 crc kubenswrapper[5010]: E0203 11:14:17.601837 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:14:30 crc kubenswrapper[5010]: I0203 11:14:30.530277 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:14:30 crc kubenswrapper[5010]: E0203 11:14:30.531992 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:14:45 crc kubenswrapper[5010]: I0203 11:14:45.502115 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:14:45 crc kubenswrapper[5010]: E0203 11:14:45.503294 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.100910 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mcw6z/must-gather-xf96m"] Feb 03 11:14:51 crc kubenswrapper[5010]: E0203 11:14:51.107119 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="extract-utilities" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.107229 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="extract-utilities" Feb 03 11:14:51 crc kubenswrapper[5010]: E0203 11:14:51.107321 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerName="copy" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.107378 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerName="copy" Feb 03 11:14:51 crc kubenswrapper[5010]: E0203 11:14:51.107432 5010 cpu_manager.go:410] "RemoveStaleState: removing container" 
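The "back-off 5m0s restarting failed container" errors above show the kubelet's crash-loop restart backoff already at its cap. Upstream defaults (treated here as assumptions, since the log only shows the 5m cap) start the delay at 10s and double it per restart up to 5m; a tiny sketch of that schedule:

// backoff.go - sketch of the kubelet-style restart backoff implied by the
// "back-off 5m0s" messages above: start at 10s, double per restart, cap at 5m.
package main

import (
	"fmt"
	"time"
)

func main() {
	d, maxDelay := 10*time.Second, 5*time.Minute
	for i := 1; i <= 7; i++ {
		fmt.Printf("restart %d: wait %s\n", i, d) // restart 6 onward prints 5m0s
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
}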
podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="registry-server" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.107482 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="registry-server" Feb 03 11:14:51 crc kubenswrapper[5010]: E0203 11:14:51.107572 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerName="gather" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.107627 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerName="gather" Feb 03 11:14:51 crc kubenswrapper[5010]: E0203 11:14:51.107687 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="extract-content" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.107742 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="extract-content" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.108004 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerName="copy" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.108074 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerName="gather" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.108147 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="registry-server" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.109403 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.114895 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mcw6z"/"openshift-service-ca.crt" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.115147 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mcw6z"/"kube-root-ca.crt" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.121778 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mcw6z"/"default-dockercfg-qc58k" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.143892 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mcw6z/must-gather-xf96m"] Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.197350 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lc2c\" (UniqueName: \"kubernetes.io/projected/9734985d-a674-4c92-b03c-7ca708780de2-kube-api-access-7lc2c\") pod \"must-gather-xf96m\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") " pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.197460 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9734985d-a674-4c92-b03c-7ca708780de2-must-gather-output\") pod \"must-gather-xf96m\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") " pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.299982 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lc2c\" 
(UniqueName: \"kubernetes.io/projected/9734985d-a674-4c92-b03c-7ca708780de2-kube-api-access-7lc2c\") pod \"must-gather-xf96m\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") " pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.300067 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9734985d-a674-4c92-b03c-7ca708780de2-must-gather-output\") pod \"must-gather-xf96m\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") " pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.300724 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9734985d-a674-4c92-b03c-7ca708780de2-must-gather-output\") pod \"must-gather-xf96m\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") " pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.891556 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lc2c\" (UniqueName: \"kubernetes.io/projected/9734985d-a674-4c92-b03c-7ca708780de2-kube-api-access-7lc2c\") pod \"must-gather-xf96m\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") " pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:52 crc kubenswrapper[5010]: I0203 11:14:52.032081 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:52 crc kubenswrapper[5010]: I0203 11:14:52.523633 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mcw6z/must-gather-xf96m"] Feb 03 11:14:53 crc kubenswrapper[5010]: I0203 11:14:53.097446 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/must-gather-xf96m" event={"ID":"9734985d-a674-4c92-b03c-7ca708780de2","Type":"ContainerStarted","Data":"1bb6ed59c0b4992b1aaa8c727fe9862558803252bbff9dc2431ce922cbca729c"} Feb 03 11:14:53 crc kubenswrapper[5010]: I0203 11:14:53.097962 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/must-gather-xf96m" event={"ID":"9734985d-a674-4c92-b03c-7ca708780de2","Type":"ContainerStarted","Data":"05ab0abbc9679831aee8cf150363b170113cefe84ec90a83a731ed49cebad061"} Feb 03 11:14:54 crc kubenswrapper[5010]: I0203 11:14:54.111784 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/must-gather-xf96m" event={"ID":"9734985d-a674-4c92-b03c-7ca708780de2","Type":"ContainerStarted","Data":"10474f5f43472032315addbe669cd60be39554b99965e76916b96cb1a8a1f7cb"} Feb 03 11:14:54 crc kubenswrapper[5010]: I0203 11:14:54.133805 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mcw6z/must-gather-xf96m" podStartSLOduration=3.133776087 podStartE2EDuration="3.133776087s" podCreationTimestamp="2026-02-03 11:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 11:14:54.130631709 +0000 UTC m=+4364.286607838" watchObservedRunningTime="2026-02-03 11:14:54.133776087 +0000 UTC m=+4364.289752216" Feb 03 11:14:56 crc kubenswrapper[5010]: I0203 11:14:56.882741 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-svtxv"] Feb 03 11:14:56 crc kubenswrapper[5010]: I0203 11:14:56.884942 5010 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:56 crc kubenswrapper[5010]: I0203 11:14:56.954084 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/44a2f827-854b-449a-84ef-1056dd3f6551-host\") pod \"crc-debug-svtxv\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:56 crc kubenswrapper[5010]: I0203 11:14:56.954302 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8js\" (UniqueName: \"kubernetes.io/projected/44a2f827-854b-449a-84ef-1056dd3f6551-kube-api-access-rt8js\") pod \"crc-debug-svtxv\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:57 crc kubenswrapper[5010]: I0203 11:14:57.056804 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt8js\" (UniqueName: \"kubernetes.io/projected/44a2f827-854b-449a-84ef-1056dd3f6551-kube-api-access-rt8js\") pod \"crc-debug-svtxv\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:57 crc kubenswrapper[5010]: I0203 11:14:57.056882 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/44a2f827-854b-449a-84ef-1056dd3f6551-host\") pod \"crc-debug-svtxv\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:57 crc kubenswrapper[5010]: I0203 11:14:57.057042 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/44a2f827-854b-449a-84ef-1056dd3f6551-host\") pod \"crc-debug-svtxv\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:57 crc kubenswrapper[5010]: I0203 11:14:57.078467 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt8js\" (UniqueName: \"kubernetes.io/projected/44a2f827-854b-449a-84ef-1056dd3f6551-kube-api-access-rt8js\") pod \"crc-debug-svtxv\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:57 crc kubenswrapper[5010]: I0203 11:14:57.208166 5010 util.go:30] "No sandbox for pod can be found. 
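crc-debug-svtxv mounts a single hostPath volume named "host", the usual `oc debug node/...` shape: the node's root filesystem exposed inside the pod. A sketch of that volume follows; the "/" source path, the /host mount path, and the privileged setting are assumptions based on that convention, since the log records only the volume name.

// debugpod.go - sketch of the hostPath volume that crc-debug-svtxv mounts above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	privileged := true
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "host", // volume name from the log
				VolumeSource: corev1.VolumeSource{HostPath: &corev1.HostPathVolumeSource{Path: "/"}}, // assumed
			}},
			Containers: []corev1.Container{{
				Name:            "container-00", // assumed
				SecurityContext: &corev1.SecurityContext{Privileged: &privileged}, // assumed
				VolumeMounts:    []corev1.VolumeMount{{Name: "host", MountPath: "/host"}},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].VolumeSource.HostPath.Path)
}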
Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:57 crc kubenswrapper[5010]: W0203 11:14:57.249195 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44a2f827_854b_449a_84ef_1056dd3f6551.slice/crio-dfda6f0c403cfd8f1cc1440fcba1f368a2177ea3efe616ff02d08908a9af0a0e WatchSource:0}: Error finding container dfda6f0c403cfd8f1cc1440fcba1f368a2177ea3efe616ff02d08908a9af0a0e: Status 404 returned error can't find the container with id dfda6f0c403cfd8f1cc1440fcba1f368a2177ea3efe616ff02d08908a9af0a0e Feb 03 11:14:58 crc kubenswrapper[5010]: I0203 11:14:58.173736 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" event={"ID":"44a2f827-854b-449a-84ef-1056dd3f6551","Type":"ContainerStarted","Data":"da5a6743ef56c67276b9a41831c4be7bccdaf47755f96146ee789a456925019b"} Feb 03 11:14:58 crc kubenswrapper[5010]: I0203 11:14:58.174434 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" event={"ID":"44a2f827-854b-449a-84ef-1056dd3f6551","Type":"ContainerStarted","Data":"dfda6f0c403cfd8f1cc1440fcba1f368a2177ea3efe616ff02d08908a9af0a0e"} Feb 03 11:14:58 crc kubenswrapper[5010]: I0203 11:14:58.215669 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" podStartSLOduration=2.215646629 podStartE2EDuration="2.215646629s" podCreationTimestamp="2026-02-03 11:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 11:14:58.209152608 +0000 UTC m=+4368.365128737" watchObservedRunningTime="2026-02-03 11:14:58.215646629 +0000 UTC m=+4368.371622758" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.192149 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7"] Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.194615 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.197972 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.201284 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.206448 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7"] Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.327248 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70655f23-d08e-4b01-85a3-abe91c302928-config-volume\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.327421 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt6cr\" (UniqueName: \"kubernetes.io/projected/70655f23-d08e-4b01-85a3-abe91c302928-kube-api-access-tt6cr\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.327549 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70655f23-d08e-4b01-85a3-abe91c302928-secret-volume\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.429716 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70655f23-d08e-4b01-85a3-abe91c302928-config-volume\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.429838 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt6cr\" (UniqueName: \"kubernetes.io/projected/70655f23-d08e-4b01-85a3-abe91c302928-kube-api-access-tt6cr\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.429909 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70655f23-d08e-4b01-85a3-abe91c302928-secret-volume\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.432639 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70655f23-d08e-4b01-85a3-abe91c302928-config-volume\") pod 
\"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.438205 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70655f23-d08e-4b01-85a3-abe91c302928-secret-volume\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.448960 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt6cr\" (UniqueName: \"kubernetes.io/projected/70655f23-d08e-4b01-85a3-abe91c302928-kube-api-access-tt6cr\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.509635 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:15:00 crc kubenswrapper[5010]: E0203 11:15:00.510375 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.524085 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:01 crc kubenswrapper[5010]: I0203 11:15:01.090691 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7"] Feb 03 11:15:01 crc kubenswrapper[5010]: I0203 11:15:01.212321 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" event={"ID":"70655f23-d08e-4b01-85a3-abe91c302928","Type":"ContainerStarted","Data":"2433ad62acce4157c35bbc328622aef6febcc5b182871e799aabbdc9fd47fa60"} Feb 03 11:15:02 crc kubenswrapper[5010]: I0203 11:15:02.226368 5010 generic.go:334] "Generic (PLEG): container finished" podID="70655f23-d08e-4b01-85a3-abe91c302928" containerID="683870c4ba048ecd94c07fb0d8aef48237fc85bc962ccb5cab622d562e3c45dd" exitCode=0 Feb 03 11:15:02 crc kubenswrapper[5010]: I0203 11:15:02.226488 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" event={"ID":"70655f23-d08e-4b01-85a3-abe91c302928","Type":"ContainerDied","Data":"683870c4ba048ecd94c07fb0d8aef48237fc85bc962ccb5cab622d562e3c45dd"} Feb 03 11:15:03 crc kubenswrapper[5010]: I0203 11:15:03.648167 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:03 crc kubenswrapper[5010]: I0203 11:15:03.804646 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70655f23-d08e-4b01-85a3-abe91c302928-config-volume\") pod \"70655f23-d08e-4b01-85a3-abe91c302928\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " Feb 03 11:15:03 crc kubenswrapper[5010]: I0203 11:15:03.804758 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt6cr\" (UniqueName: \"kubernetes.io/projected/70655f23-d08e-4b01-85a3-abe91c302928-kube-api-access-tt6cr\") pod \"70655f23-d08e-4b01-85a3-abe91c302928\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " Feb 03 11:15:03 crc kubenswrapper[5010]: I0203 11:15:03.804803 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70655f23-d08e-4b01-85a3-abe91c302928-secret-volume\") pod \"70655f23-d08e-4b01-85a3-abe91c302928\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " Feb 03 11:15:03 crc kubenswrapper[5010]: I0203 11:15:03.806649 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70655f23-d08e-4b01-85a3-abe91c302928-config-volume" (OuterVolumeSpecName: "config-volume") pod "70655f23-d08e-4b01-85a3-abe91c302928" (UID: "70655f23-d08e-4b01-85a3-abe91c302928"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 11:15:03 crc kubenswrapper[5010]: I0203 11:15:03.806959 5010 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70655f23-d08e-4b01-85a3-abe91c302928-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.252829 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" event={"ID":"70655f23-d08e-4b01-85a3-abe91c302928","Type":"ContainerDied","Data":"2433ad62acce4157c35bbc328622aef6febcc5b182871e799aabbdc9fd47fa60"} Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.253309 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2433ad62acce4157c35bbc328622aef6febcc5b182871e799aabbdc9fd47fa60" Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.252893 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.385608 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70655f23-d08e-4b01-85a3-abe91c302928-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "70655f23-d08e-4b01-85a3-abe91c302928" (UID: "70655f23-d08e-4b01-85a3-abe91c302928"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.389558 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70655f23-d08e-4b01-85a3-abe91c302928-kube-api-access-tt6cr" (OuterVolumeSpecName: "kube-api-access-tt6cr") pod "70655f23-d08e-4b01-85a3-abe91c302928" (UID: "70655f23-d08e-4b01-85a3-abe91c302928"). InnerVolumeSpecName "kube-api-access-tt6cr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.436861 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt6cr\" (UniqueName: \"kubernetes.io/projected/70655f23-d08e-4b01-85a3-abe91c302928-kube-api-access-tt6cr\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.436927 5010 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70655f23-d08e-4b01-85a3-abe91c302928-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.776195 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb"] Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.784581 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb"] Feb 03 11:15:06 crc kubenswrapper[5010]: I0203 11:15:06.515153 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e554f0-be79-4c9c-974d-f25941ae930e" path="/var/lib/kubelet/pods/34e554f0-be79-4c9c-974d-f25941ae930e/volumes" Feb 03 11:15:13 crc kubenswrapper[5010]: I0203 11:15:13.503636 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:15:13 crc kubenswrapper[5010]: E0203 11:15:13.504833 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:15:23 crc kubenswrapper[5010]: I0203 11:15:23.198396 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7594db59b7-8cg94" podUID="a0d01af0-abb7-4cd1-92d7-d741182948f9" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 03 11:15:24 crc kubenswrapper[5010]: I0203 11:15:24.308450 5010 scope.go:117] "RemoveContainer" containerID="50c1d73139063edd3d9e95aeb676f19fdb661e56cb93f7dad0c5a0ed756233ca" Feb 03 11:15:27 crc kubenswrapper[5010]: I0203 11:15:27.502391 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:15:27 crc kubenswrapper[5010]: E0203 11:15:27.503674 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:15:36 crc kubenswrapper[5010]: I0203 11:15:36.664002 5010 generic.go:334] "Generic (PLEG): container finished" podID="44a2f827-854b-449a-84ef-1056dd3f6551" containerID="da5a6743ef56c67276b9a41831c4be7bccdaf47755f96146ee789a456925019b" exitCode=0 Feb 03 11:15:36 crc kubenswrapper[5010]: I0203 11:15:36.664100 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" 
event={"ID":"44a2f827-854b-449a-84ef-1056dd3f6551","Type":"ContainerDied","Data":"da5a6743ef56c67276b9a41831c4be7bccdaf47755f96146ee789a456925019b"} Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.810084 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.867564 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-svtxv"] Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.877996 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-svtxv"] Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.955902 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/44a2f827-854b-449a-84ef-1056dd3f6551-host\") pod \"44a2f827-854b-449a-84ef-1056dd3f6551\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.956041 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a2f827-854b-449a-84ef-1056dd3f6551-host" (OuterVolumeSpecName: "host") pod "44a2f827-854b-449a-84ef-1056dd3f6551" (UID: "44a2f827-854b-449a-84ef-1056dd3f6551"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.956128 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt8js\" (UniqueName: \"kubernetes.io/projected/44a2f827-854b-449a-84ef-1056dd3f6551-kube-api-access-rt8js\") pod \"44a2f827-854b-449a-84ef-1056dd3f6551\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.956588 5010 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/44a2f827-854b-449a-84ef-1056dd3f6551-host\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.964288 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a2f827-854b-449a-84ef-1056dd3f6551-kube-api-access-rt8js" (OuterVolumeSpecName: "kube-api-access-rt8js") pod "44a2f827-854b-449a-84ef-1056dd3f6551" (UID: "44a2f827-854b-449a-84ef-1056dd3f6551"). InnerVolumeSpecName "kube-api-access-rt8js". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:15:38 crc kubenswrapper[5010]: I0203 11:15:38.059452 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt8js\" (UniqueName: \"kubernetes.io/projected/44a2f827-854b-449a-84ef-1056dd3f6551-kube-api-access-rt8js\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:38 crc kubenswrapper[5010]: I0203 11:15:38.503354 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:15:38 crc kubenswrapper[5010]: E0203 11:15:38.503699 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:15:38 crc kubenswrapper[5010]: I0203 11:15:38.514835 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44a2f827-854b-449a-84ef-1056dd3f6551" path="/var/lib/kubelet/pods/44a2f827-854b-449a-84ef-1056dd3f6551/volumes" Feb 03 11:15:38 crc kubenswrapper[5010]: I0203 11:15:38.685461 5010 scope.go:117] "RemoveContainer" containerID="da5a6743ef56c67276b9a41831c4be7bccdaf47755f96146ee789a456925019b" Feb 03 11:15:38 crc kubenswrapper[5010]: I0203 11:15:38.685584 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.091814 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-xfzbj"] Feb 03 11:15:39 crc kubenswrapper[5010]: E0203 11:15:39.092880 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70655f23-d08e-4b01-85a3-abe91c302928" containerName="collect-profiles" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.092901 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="70655f23-d08e-4b01-85a3-abe91c302928" containerName="collect-profiles" Feb 03 11:15:39 crc kubenswrapper[5010]: E0203 11:15:39.092922 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a2f827-854b-449a-84ef-1056dd3f6551" containerName="container-00" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.092929 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a2f827-854b-449a-84ef-1056dd3f6551" containerName="container-00" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.093144 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="70655f23-d08e-4b01-85a3-abe91c302928" containerName="collect-profiles" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.093175 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="44a2f827-854b-449a-84ef-1056dd3f6551" containerName="container-00" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.094068 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.185980 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d8kk\" (UniqueName: \"kubernetes.io/projected/1e8c915b-848b-484b-9bea-d9b01737deb8-kube-api-access-9d8kk\") pod \"crc-debug-xfzbj\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.186277 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e8c915b-848b-484b-9bea-d9b01737deb8-host\") pod \"crc-debug-xfzbj\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.289349 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d8kk\" (UniqueName: \"kubernetes.io/projected/1e8c915b-848b-484b-9bea-d9b01737deb8-kube-api-access-9d8kk\") pod \"crc-debug-xfzbj\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.289442 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e8c915b-848b-484b-9bea-d9b01737deb8-host\") pod \"crc-debug-xfzbj\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.289653 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e8c915b-848b-484b-9bea-d9b01737deb8-host\") pod \"crc-debug-xfzbj\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.326193 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d8kk\" (UniqueName: \"kubernetes.io/projected/1e8c915b-848b-484b-9bea-d9b01737deb8-kube-api-access-9d8kk\") pod \"crc-debug-xfzbj\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.419668 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.702553 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" event={"ID":"1e8c915b-848b-484b-9bea-d9b01737deb8","Type":"ContainerStarted","Data":"63813946f52060743276adfc0a668470e564cbc2eb88f2cce410e37b7f6b53fc"} Feb 03 11:15:40 crc kubenswrapper[5010]: I0203 11:15:40.714638 5010 generic.go:334] "Generic (PLEG): container finished" podID="1e8c915b-848b-484b-9bea-d9b01737deb8" containerID="51fcb5bf6651fadaf5858665eb6318be90bb636a234fbb36614ef116c2582598" exitCode=0 Feb 03 11:15:40 crc kubenswrapper[5010]: I0203 11:15:40.714817 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" event={"ID":"1e8c915b-848b-484b-9bea-d9b01737deb8","Type":"ContainerDied","Data":"51fcb5bf6651fadaf5858665eb6318be90bb636a234fbb36614ef116c2582598"} Feb 03 11:15:41 crc kubenswrapper[5010]: I0203 11:15:41.207846 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-xfzbj"] Feb 03 11:15:41 crc kubenswrapper[5010]: I0203 11:15:41.222972 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-xfzbj"] Feb 03 11:15:41 crc kubenswrapper[5010]: I0203 11:15:41.857547 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:41 crc kubenswrapper[5010]: I0203 11:15:41.963965 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d8kk\" (UniqueName: \"kubernetes.io/projected/1e8c915b-848b-484b-9bea-d9b01737deb8-kube-api-access-9d8kk\") pod \"1e8c915b-848b-484b-9bea-d9b01737deb8\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " Feb 03 11:15:41 crc kubenswrapper[5010]: I0203 11:15:41.964823 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e8c915b-848b-484b-9bea-d9b01737deb8-host\") pod \"1e8c915b-848b-484b-9bea-d9b01737deb8\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " Feb 03 11:15:41 crc kubenswrapper[5010]: I0203 11:15:41.965591 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e8c915b-848b-484b-9bea-d9b01737deb8-host" (OuterVolumeSpecName: "host") pod "1e8c915b-848b-484b-9bea-d9b01737deb8" (UID: "1e8c915b-848b-484b-9bea-d9b01737deb8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 11:15:41 crc kubenswrapper[5010]: I0203 11:15:41.974531 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e8c915b-848b-484b-9bea-d9b01737deb8-kube-api-access-9d8kk" (OuterVolumeSpecName: "kube-api-access-9d8kk") pod "1e8c915b-848b-484b-9bea-d9b01737deb8" (UID: "1e8c915b-848b-484b-9bea-d9b01737deb8"). InnerVolumeSpecName "kube-api-access-9d8kk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.067636 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d8kk\" (UniqueName: \"kubernetes.io/projected/1e8c915b-848b-484b-9bea-d9b01737deb8-kube-api-access-9d8kk\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.067704 5010 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e8c915b-848b-484b-9bea-d9b01737deb8-host\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.516680 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e8c915b-848b-484b-9bea-d9b01737deb8" path="/var/lib/kubelet/pods/1e8c915b-848b-484b-9bea-d9b01737deb8/volumes" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.543320 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-k79rg"] Feb 03 11:15:42 crc kubenswrapper[5010]: E0203 11:15:42.543922 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e8c915b-848b-484b-9bea-d9b01737deb8" containerName="container-00" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.543948 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e8c915b-848b-484b-9bea-d9b01737deb8" containerName="container-00" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.544256 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e8c915b-848b-484b-9bea-d9b01737deb8" containerName="container-00" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.545473 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.681084 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsgv9\" (UniqueName: \"kubernetes.io/projected/afa2e74e-076a-4f5b-acf8-eb116df93c94-kube-api-access-dsgv9\") pod \"crc-debug-k79rg\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.681269 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afa2e74e-076a-4f5b-acf8-eb116df93c94-host\") pod \"crc-debug-k79rg\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.735422 5010 scope.go:117] "RemoveContainer" containerID="51fcb5bf6651fadaf5858665eb6318be90bb636a234fbb36614ef116c2582598" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.735459 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.783685 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afa2e74e-076a-4f5b-acf8-eb116df93c94-host\") pod \"crc-debug-k79rg\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.783864 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgv9\" (UniqueName: \"kubernetes.io/projected/afa2e74e-076a-4f5b-acf8-eb116df93c94-kube-api-access-dsgv9\") pod \"crc-debug-k79rg\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.783864 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afa2e74e-076a-4f5b-acf8-eb116df93c94-host\") pod \"crc-debug-k79rg\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:43 crc kubenswrapper[5010]: I0203 11:15:43.084015 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgv9\" (UniqueName: \"kubernetes.io/projected/afa2e74e-076a-4f5b-acf8-eb116df93c94-kube-api-access-dsgv9\") pod \"crc-debug-k79rg\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:43 crc kubenswrapper[5010]: I0203 11:15:43.167302 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:43 crc kubenswrapper[5010]: W0203 11:15:43.207733 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafa2e74e_076a_4f5b_acf8_eb116df93c94.slice/crio-2c88ad76c0bf2b923d66e87f21333ac6fe94ecdca382cc7114120194b4200730 WatchSource:0}: Error finding container 2c88ad76c0bf2b923d66e87f21333ac6fe94ecdca382cc7114120194b4200730: Status 404 returned error can't find the container with id 2c88ad76c0bf2b923d66e87f21333ac6fe94ecdca382cc7114120194b4200730 Feb 03 11:15:43 crc kubenswrapper[5010]: I0203 11:15:43.746492 5010 generic.go:334] "Generic (PLEG): container finished" podID="afa2e74e-076a-4f5b-acf8-eb116df93c94" containerID="406e4918c67a9656dc6cdcdad3d111483dbc23ef9b81287c1855292c83442925" exitCode=0 Feb 03 11:15:43 crc kubenswrapper[5010]: I0203 11:15:43.746586 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/crc-debug-k79rg" event={"ID":"afa2e74e-076a-4f5b-acf8-eb116df93c94","Type":"ContainerDied","Data":"406e4918c67a9656dc6cdcdad3d111483dbc23ef9b81287c1855292c83442925"} Feb 03 11:15:43 crc kubenswrapper[5010]: I0203 11:15:43.747105 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/crc-debug-k79rg" event={"ID":"afa2e74e-076a-4f5b-acf8-eb116df93c94","Type":"ContainerStarted","Data":"2c88ad76c0bf2b923d66e87f21333ac6fe94ecdca382cc7114120194b4200730"} Feb 03 11:15:43 crc kubenswrapper[5010]: I0203 11:15:43.795237 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-k79rg"] Feb 03 11:15:43 crc kubenswrapper[5010]: I0203 11:15:43.804503 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-k79rg"] Feb 03 11:15:44 crc 
kubenswrapper[5010]: I0203 11:15:44.879258 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.031350 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsgv9\" (UniqueName: \"kubernetes.io/projected/afa2e74e-076a-4f5b-acf8-eb116df93c94-kube-api-access-dsgv9\") pod \"afa2e74e-076a-4f5b-acf8-eb116df93c94\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.031461 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afa2e74e-076a-4f5b-acf8-eb116df93c94-host\") pod \"afa2e74e-076a-4f5b-acf8-eb116df93c94\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.031969 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afa2e74e-076a-4f5b-acf8-eb116df93c94-host" (OuterVolumeSpecName: "host") pod "afa2e74e-076a-4f5b-acf8-eb116df93c94" (UID: "afa2e74e-076a-4f5b-acf8-eb116df93c94"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.032512 5010 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afa2e74e-076a-4f5b-acf8-eb116df93c94-host\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.683563 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa2e74e-076a-4f5b-acf8-eb116df93c94-kube-api-access-dsgv9" (OuterVolumeSpecName: "kube-api-access-dsgv9") pod "afa2e74e-076a-4f5b-acf8-eb116df93c94" (UID: "afa2e74e-076a-4f5b-acf8-eb116df93c94"). InnerVolumeSpecName "kube-api-access-dsgv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.750488 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsgv9\" (UniqueName: \"kubernetes.io/projected/afa2e74e-076a-4f5b-acf8-eb116df93c94-kube-api-access-dsgv9\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.771638 5010 scope.go:117] "RemoveContainer" containerID="406e4918c67a9656dc6cdcdad3d111483dbc23ef9b81287c1855292c83442925" Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.771722 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:46 crc kubenswrapper[5010]: I0203 11:15:46.516048 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afa2e74e-076a-4f5b-acf8-eb116df93c94" path="/var/lib/kubelet/pods/afa2e74e-076a-4f5b-acf8-eb116df93c94/volumes" Feb 03 11:15:51 crc kubenswrapper[5010]: I0203 11:15:51.503515 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:15:51 crc kubenswrapper[5010]: E0203 11:15:51.504645 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:16:06 crc kubenswrapper[5010]: I0203 11:16:06.503135 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:16:06 crc kubenswrapper[5010]: E0203 11:16:06.504579 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:16:20 crc kubenswrapper[5010]: I0203 11:16:20.508687 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:16:20 crc kubenswrapper[5010]: E0203 11:16:20.511056 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.800199 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w8svr"] Feb 03 11:16:21 crc kubenswrapper[5010]: E0203 11:16:21.802467 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa2e74e-076a-4f5b-acf8-eb116df93c94" containerName="container-00" Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.802589 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa2e74e-076a-4f5b-acf8-eb116df93c94" containerName="container-00" Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.802915 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa2e74e-076a-4f5b-acf8-eb116df93c94" containerName="container-00" Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.805705 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.814841 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8svr"] Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.957073 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-utilities\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.957523 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-catalog-content\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.957830 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz5xd\" (UniqueName: \"kubernetes.io/projected/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-kube-api-access-cz5xd\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.060314 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz5xd\" (UniqueName: \"kubernetes.io/projected/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-kube-api-access-cz5xd\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.060418 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-utilities\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.060459 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-catalog-content\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.061372 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-catalog-content\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.061734 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-utilities\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.087560 5010 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cz5xd\" (UniqueName: \"kubernetes.io/projected/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-kube-api-access-cz5xd\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.145474 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.716883 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8svr"] Feb 03 11:16:23 crc kubenswrapper[5010]: I0203 11:16:23.250674 5010 generic.go:334] "Generic (PLEG): container finished" podID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerID="64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a" exitCode=0 Feb 03 11:16:23 crc kubenswrapper[5010]: I0203 11:16:23.250716 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8svr" event={"ID":"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774","Type":"ContainerDied","Data":"64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a"} Feb 03 11:16:23 crc kubenswrapper[5010]: I0203 11:16:23.250900 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8svr" event={"ID":"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774","Type":"ContainerStarted","Data":"a625a88c3772c4a6e67478d73e58636fba9dd936e9e8c89dbafdce51c27cd0d3"} Feb 03 11:16:24 crc kubenswrapper[5010]: I0203 11:16:24.601929 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f67746f54-2l6b9_3bab826b-af5f-4bd1-a68a-0bdda5f89d80/barbican-api/0.log" Feb 03 11:16:24 crc kubenswrapper[5010]: I0203 11:16:24.843786 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f67746f54-2l6b9_3bab826b-af5f-4bd1-a68a-0bdda5f89d80/barbican-api-log/0.log" Feb 03 11:16:24 crc kubenswrapper[5010]: I0203 11:16:24.936404 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-85855ff49d-76x8k_f377630f-64f3-4fd9-8449-53d739d775c2/barbican-keystone-listener-log/0.log" Feb 03 11:16:24 crc kubenswrapper[5010]: I0203 11:16:24.959669 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-85855ff49d-76x8k_f377630f-64f3-4fd9-8449-53d739d775c2/barbican-keystone-listener/0.log" Feb 03 11:16:25 crc kubenswrapper[5010]: I0203 11:16:25.271537 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8svr" event={"ID":"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774","Type":"ContainerStarted","Data":"6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5"} Feb 03 11:16:25 crc kubenswrapper[5010]: I0203 11:16:25.800091 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bdd746887-zr9j6_4cb276c1-b6b3-45ef-84be-8bae1d46d9d7/barbican-worker/0.log" Feb 03 11:16:25 crc kubenswrapper[5010]: I0203 11:16:25.835191 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bdd746887-zr9j6_4cb276c1-b6b3-45ef-84be-8bae1d46d9d7/barbican-worker-log/0.log" Feb 03 11:16:25 crc kubenswrapper[5010]: I0203 11:16:25.903326 5010 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf_2d389772-7902-4aca-8bc3-03a0708fbaa2/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.083061 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/ceilometer-central-agent/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.147607 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/ceilometer-notification-agent/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.190274 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/proxy-httpd/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.285441 5010 generic.go:334] "Generic (PLEG): container finished" podID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerID="6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5" exitCode=0 Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.285502 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8svr" event={"ID":"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774","Type":"ContainerDied","Data":"6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5"} Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.296900 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/sg-core/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.447226 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7e079d37-86a2-4be8-a16b-821095c780f0/cinder-api-log/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.449399 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7e079d37-86a2-4be8-a16b-821095c780f0/cinder-api/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.669342 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_63ed8c2d-6ac3-4a61-8e4c-1601efeca708/probe/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.716002 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_63ed8c2d-6ac3-4a61-8e4c-1601efeca708/cinder-scheduler/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.802592 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5tffc_efb76028-3500-476c-adef-dfc87d2cdab7/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.943121 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-ktk67_f4e7c571-ff51-496f-81b8-2fee3f357d3f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.052785 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-845df_3d935acc-a244-4c1f-a9f8-9924fa8b61f1/init/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.297688 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-845df_3d935acc-a244-4c1f-a9f8-9924fa8b61f1/init/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.298394 5010 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-w8svr" event={"ID":"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774","Type":"ContainerStarted","Data":"9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e"} Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.379078 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w8svr" podStartSLOduration=2.649709281 podStartE2EDuration="6.379052465s" podCreationTimestamp="2026-02-03 11:16:21 +0000 UTC" firstStartedPulling="2026-02-03 11:16:23.253043231 +0000 UTC m=+4453.409019360" lastFinishedPulling="2026-02-03 11:16:26.982386415 +0000 UTC m=+4457.138362544" observedRunningTime="2026-02-03 11:16:27.330748068 +0000 UTC m=+4457.486724207" watchObservedRunningTime="2026-02-03 11:16:27.379052465 +0000 UTC m=+4457.535028594" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.401074 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-845df_3d935acc-a244-4c1f-a9f8-9924fa8b61f1/dnsmasq-dns/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.424070 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs_96722ef6-9c22-4700-8163-b25503d014bd/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.625755 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1769cccf-496c-4370-8e08-e1f156fecd77/glance-log/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.688876 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1769cccf-496c-4370-8e08-e1f156fecd77/glance-httpd/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.959416 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a/glance-httpd/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.960183 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a/glance-log/0.log" Feb 03 11:16:28 crc kubenswrapper[5010]: I0203 11:16:28.166462 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988db4-2mpfb_2fedcc57-b16c-4177-a10e-f627269b4adb/horizon/1.log" Feb 03 11:16:28 crc kubenswrapper[5010]: I0203 11:16:28.309118 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-msc5t_af6128d5-2369-4ef9-99aa-61ad0bf3b213/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:28 crc kubenswrapper[5010]: I0203 11:16:28.391907 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988db4-2mpfb_2fedcc57-b16c-4177-a10e-f627269b4adb/horizon/0.log" Feb 03 11:16:28 crc kubenswrapper[5010]: I0203 11:16:28.661706 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988db4-2mpfb_2fedcc57-b16c-4177-a10e-f627269b4adb/horizon-log/0.log" Feb 03 11:16:28 crc kubenswrapper[5010]: I0203 11:16:28.695306 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-hz8vx_49056616-86cd-41cd-a102-1072dc2a79f4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:28 crc kubenswrapper[5010]: I0203 11:16:28.973360 5010 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_keystone-cron-29501941-gv4sr_96c330a2-14f4-4923-8707-6b9cce98267f/keystone-cron/0.log" Feb 03 11:16:29 crc kubenswrapper[5010]: I0203 11:16:29.008643 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-675cc696d4-7wvtv_8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4/keystone-api/0.log" Feb 03 11:16:29 crc kubenswrapper[5010]: I0203 11:16:29.174409 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_de374df0-0b73-4be2-9719-d4b471782ed4/kube-state-metrics/0.log" Feb 03 11:16:29 crc kubenswrapper[5010]: I0203 11:16:29.270028 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d_5b7ff70c-1251-4fd5-a71c-bf6703bcc85d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:29 crc kubenswrapper[5010]: I0203 11:16:29.690864 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78c78c7889-r9575_158ac65e-849e-4f85-a4b6-1ac4bde1a1ec/neutron-httpd/0.log" Feb 03 11:16:29 crc kubenswrapper[5010]: I0203 11:16:29.739115 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78c78c7889-r9575_158ac65e-849e-4f85-a4b6-1ac4bde1a1ec/neutron-api/0.log" Feb 03 11:16:29 crc kubenswrapper[5010]: I0203 11:16:29.823422 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p_4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:30 crc kubenswrapper[5010]: I0203 11:16:30.324654 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aba2689d-cd13-4601-ac45-69409c411839/nova-api-log/0.log" Feb 03 11:16:30 crc kubenswrapper[5010]: I0203 11:16:30.415228 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_26dec936-0343-4d5f-8f2b-cf2a797786b5/nova-cell0-conductor-conductor/0.log" Feb 03 11:16:30 crc kubenswrapper[5010]: I0203 11:16:30.868309 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_291a9878-85fe-4988-8a7d-1da10ac49b23/nova-cell1-conductor-conductor/0.log" Feb 03 11:16:30 crc kubenswrapper[5010]: I0203 11:16:30.879160 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aba2689d-cd13-4601-ac45-69409c411839/nova-api-api/0.log" Feb 03 11:16:30 crc kubenswrapper[5010]: I0203 11:16:30.884712 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c9bd4788-ae5f-49c4-8116-04076a16f4f1/nova-cell1-novncproxy-novncproxy/0.log" Feb 03 11:16:31 crc kubenswrapper[5010]: I0203 11:16:31.123672 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-bq7n5_6fd37dcf-e81a-491a-a5e1-01a27517d1b4/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:31 crc kubenswrapper[5010]: I0203 11:16:31.328475 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_edaaf3a7-a254-4a29-875a-643e46308f33/nova-metadata-log/0.log" Feb 03 11:16:31 crc kubenswrapper[5010]: I0203 11:16:31.638112 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_28559aae-4731-4653-a466-8c6f5c6c7dcf/nova-scheduler-scheduler/0.log" Feb 03 11:16:31 crc kubenswrapper[5010]: I0203 11:16:31.663392 5010 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_87eb5dd8-7171-457a-8a95-eda98893319a/mysql-bootstrap/0.log" Feb 03 11:16:31 crc kubenswrapper[5010]: I0203 11:16:31.850454 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_87eb5dd8-7171-457a-8a95-eda98893319a/mysql-bootstrap/0.log" Feb 03 11:16:31 crc kubenswrapper[5010]: I0203 11:16:31.961586 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_87eb5dd8-7171-457a-8a95-eda98893319a/galera/0.log" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.108040 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_449f0b91-9186-4a16-b1b4-7f199b57a428/mysql-bootstrap/0.log" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.145601 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.145654 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.206104 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.357444 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_449f0b91-9186-4a16-b1b4-7f199b57a428/galera/0.log" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.362822 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_449f0b91-9186-4a16-b1b4-7f199b57a428/mysql-bootstrap/0.log" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.410280 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.495857 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8svr"] Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.615317 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c80632c0-72bc-461d-8e87-591d0ddbc1a8/openstackclient/0.log" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.735046 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vqkq5_5235b9fc-3723-4d8a-9851-e8ee89c0b084/openstack-network-exporter/0.log" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.932988 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovsdb-server-init/0.log" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.954509 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_edaaf3a7-a254-4a29-875a-643e46308f33/nova-metadata-metadata/0.log" Feb 03 11:16:33 crc kubenswrapper[5010]: I0203 11:16:33.459021 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovsdb-server/0.log" Feb 03 11:16:33 crc kubenswrapper[5010]: I0203 11:16:33.502712 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:16:33 crc kubenswrapper[5010]: E0203 11:16:33.503040 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:16:33 crc kubenswrapper[5010]: I0203 11:16:33.522490 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovsdb-server-init/0.log" Feb 03 11:16:33 crc kubenswrapper[5010]: I0203 11:16:33.525141 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovs-vswitchd/0.log" Feb 03 11:16:33 crc kubenswrapper[5010]: I0203 11:16:33.699517 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ql6ht_1883c30e-4c38-468d-a5dc-91b07f167d67/ovn-controller/0.log" Feb 03 11:16:34 crc kubenswrapper[5010]: I0203 11:16:34.370002 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w8svr" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerName="registry-server" containerID="cri-o://9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e" gracePeriod=2 Feb 03 11:16:34 crc kubenswrapper[5010]: I0203 11:16:34.589244 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5158e153-9918-4fce-8f2f-75a87b96562b/openstack-network-exporter/0.log" Feb 03 11:16:34 crc kubenswrapper[5010]: I0203 11:16:34.625151 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-js9ms_a3aac34b-fb9e-4853-9a1d-c311dc75f055/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:34 crc kubenswrapper[5010]: I0203 11:16:34.869013 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5158e153-9918-4fce-8f2f-75a87b96562b/ovn-northd/0.log" Feb 03 11:16:34 crc kubenswrapper[5010]: I0203 11:16:34.936206 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6d6abf1f-9905-4f96-8d44-d7ef3f9f299d/openstack-network-exporter/0.log" Feb 03 11:16:34 crc kubenswrapper[5010]: I0203 11:16:34.949655 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.022630 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6d6abf1f-9905-4f96-8d44-d7ef3f9f299d/ovsdbserver-nb/0.log" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.048724 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-catalog-content\") pod \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.048924 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-utilities\") pod \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.048967 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz5xd\" (UniqueName: \"kubernetes.io/projected/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-kube-api-access-cz5xd\") pod \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.049778 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-utilities" (OuterVolumeSpecName: "utilities") pod "8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" (UID: "8a446ddb-d2f5-4eaf-8be0-2d051c4e6774"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.056527 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-kube-api-access-cz5xd" (OuterVolumeSpecName: "kube-api-access-cz5xd") pod "8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" (UID: "8a446ddb-d2f5-4eaf-8be0-2d051c4e6774"). InnerVolumeSpecName "kube-api-access-cz5xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.075032 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" (UID: "8a446ddb-d2f5-4eaf-8be0-2d051c4e6774"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.151520 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.151568 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.151585 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz5xd\" (UniqueName: \"kubernetes.io/projected/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-kube-api-access-cz5xd\") on node \"crc\" DevicePath \"\"" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.195467 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6dfa0a64-db8a-457a-8eff-f27ffa8e02ce/openstack-network-exporter/0.log" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.284420 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6dfa0a64-db8a-457a-8eff-f27ffa8e02ce/ovsdbserver-sb/0.log" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.380704 5010 generic.go:334] "Generic (PLEG): container finished" podID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerID="9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e" exitCode=0 Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.380776 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8svr" event={"ID":"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774","Type":"ContainerDied","Data":"9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e"} Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.380808 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.380825 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8svr" event={"ID":"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774","Type":"ContainerDied","Data":"a625a88c3772c4a6e67478d73e58636fba9dd936e9e8c89dbafdce51c27cd0d3"} Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.380866 5010 scope.go:117] "RemoveContainer" containerID="9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.417448 5010 scope.go:117] "RemoveContainer" containerID="6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.431662 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8svr"] Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.441824 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8svr"] Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.443754 5010 scope.go:117] "RemoveContainer" containerID="64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.503098 5010 scope.go:117] "RemoveContainer" containerID="9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e" Feb 03 11:16:35 crc kubenswrapper[5010]: E0203 11:16:35.503535 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e\": container with ID starting with 9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e not found: ID does not exist" containerID="9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.503593 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e"} err="failed to get container status \"9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e\": rpc error: code = NotFound desc = could not find container \"9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e\": container with ID starting with 9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e not found: ID does not exist" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.503662 5010 scope.go:117] "RemoveContainer" containerID="6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5" Feb 03 11:16:35 crc kubenswrapper[5010]: E0203 11:16:35.503946 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5\": container with ID starting with 6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5 not found: ID does not exist" containerID="6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.503975 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5"} err="failed to get container status \"6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5\": rpc error: code = NotFound desc = could not find 
container \"6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5\": container with ID starting with 6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5 not found: ID does not exist" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.503993 5010 scope.go:117] "RemoveContainer" containerID="64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a" Feb 03 11:16:35 crc kubenswrapper[5010]: E0203 11:16:35.504201 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a\": container with ID starting with 64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a not found: ID does not exist" containerID="64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.504274 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a"} err="failed to get container status \"64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a\": rpc error: code = NotFound desc = could not find container \"64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a\": container with ID starting with 64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a not found: ID does not exist" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.757988 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bc6c5cf68-f9b4p_3ecd94c1-1faa-4acd-aa24-dd54388d2d99/placement-api/0.log" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.765382 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bc6c5cf68-f9b4p_3ecd94c1-1faa-4acd-aa24-dd54388d2d99/placement-log/0.log" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.777869 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf/setup-container/0.log" Feb 03 11:16:36 crc kubenswrapper[5010]: I0203 11:16:36.001042 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf/rabbitmq/0.log" Feb 03 11:16:36 crc kubenswrapper[5010]: I0203 11:16:36.008390 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf/setup-container/0.log" Feb 03 11:16:36 crc kubenswrapper[5010]: I0203 11:16:36.102057 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_543f315d-d2f8-497f-a2c1-1a929c1611be/setup-container/0.log" Feb 03 11:16:36 crc kubenswrapper[5010]: I0203 11:16:36.308958 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_543f315d-d2f8-497f-a2c1-1a929c1611be/setup-container/0.log" Feb 03 11:16:36 crc kubenswrapper[5010]: I0203 11:16:36.364231 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_543f315d-d2f8-497f-a2c1-1a929c1611be/rabbitmq/0.log" Feb 03 11:16:36 crc kubenswrapper[5010]: I0203 11:16:36.373419 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt_d4357ef1-04ea-4dbd-acd8-70f34a5a72a1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:36 crc kubenswrapper[5010]: I0203 11:16:36.516582 5010 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" path="/var/lib/kubelet/pods/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774/volumes" Feb 03 11:16:37 crc kubenswrapper[5010]: I0203 11:16:37.205979 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-r8zqk_36d3f978-a301-44e6-a401-72e94c9f70ad/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:37 crc kubenswrapper[5010]: I0203 11:16:37.247910 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mg749_43ecdc43-d866-4902-89cb-0ce68e89fe05/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:37 crc kubenswrapper[5010]: I0203 11:16:37.517951 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-nm955_a9fa7d27-81da-4dcd-adef-cb22c35d2641/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:37 crc kubenswrapper[5010]: I0203 11:16:37.561034 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pfhx5_67a7675c-9074-4390-85ab-2bba845b2dc0/ssh-known-hosts-edpm-deployment/0.log" Feb 03 11:16:37 crc kubenswrapper[5010]: I0203 11:16:37.870226 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7594db59b7-8cg94_a0d01af0-abb7-4cd1-92d7-d741182948f9/proxy-server/0.log" Feb 03 11:16:37 crc kubenswrapper[5010]: I0203 11:16:37.993197 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-n8qtn_65c9ffaf-83e3-47c1-a1e8-b097b371ccec/swift-ring-rebalance/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.018692 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7594db59b7-8cg94_a0d01af0-abb7-4cd1-92d7-d741182948f9/proxy-httpd/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.180954 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-auditor/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.302517 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-reaper/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.397189 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-server/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.399411 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-replicator/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.412744 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-auditor/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.579093 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-replicator/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.627647 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-server/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.643238 5010 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-updater/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.668300 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-auditor/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.907360 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-server/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.908990 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-updater/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.913930 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-expirer/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.935393 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-replicator/0.log" Feb 03 11:16:39 crc kubenswrapper[5010]: I0203 11:16:39.095743 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/rsync/0.log" Feb 03 11:16:39 crc kubenswrapper[5010]: I0203 11:16:39.100348 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/swift-recon-cron/0.log" Feb 03 11:16:39 crc kubenswrapper[5010]: I0203 11:16:39.259838 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h_7353ead1-b7ae-446c-a262-5a383b1d7e52/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:39 crc kubenswrapper[5010]: I0203 11:16:39.453901 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_8c8d92ab-5652-4bd9-81af-fd0be7aea36f/tempest-tests-tempest-tests-runner/0.log" Feb 03 11:16:39 crc kubenswrapper[5010]: I0203 11:16:39.495808 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_8dfa1254-0d2c-4885-a531-fc90541692e7/test-operator-logs-container/0.log" Feb 03 11:16:39 crc kubenswrapper[5010]: I0203 11:16:39.715687 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7_3109739d-69b7-439a-b6c4-a8affbe0af4f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:45 crc kubenswrapper[5010]: I0203 11:16:45.501798 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:16:45 crc kubenswrapper[5010]: E0203 11:16:45.502636 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:16:49 crc kubenswrapper[5010]: I0203 11:16:49.170508 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_95adc2d1-1093-484e-8580-53e244b420c8/memcached/0.log" Feb 03 11:16:56 crc 
kubenswrapper[5010]: I0203 11:16:56.502877 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:16:56 crc kubenswrapper[5010]: E0203 11:16:56.504155 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:17:09 crc kubenswrapper[5010]: I0203 11:17:09.502972 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:17:09 crc kubenswrapper[5010]: E0203 11:17:09.506586 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:17:10 crc kubenswrapper[5010]: I0203 11:17:10.118549 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/util/0.log" Feb 03 11:17:10 crc kubenswrapper[5010]: I0203 11:17:10.322282 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/util/0.log" Feb 03 11:17:10 crc kubenswrapper[5010]: I0203 11:17:10.344729 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/pull/0.log" Feb 03 11:17:10 crc kubenswrapper[5010]: I0203 11:17:10.364427 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/pull/0.log" Feb 03 11:17:10 crc kubenswrapper[5010]: I0203 11:17:10.522581 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/pull/0.log" Feb 03 11:17:10 crc kubenswrapper[5010]: I0203 11:17:10.541438 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/extract/0.log" Feb 03 11:17:10 crc kubenswrapper[5010]: I0203 11:17:10.552072 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/util/0.log" Feb 03 11:17:11 crc kubenswrapper[5010]: I0203 11:17:11.486377 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-52g72_a7d72ea1-7126-4768-9cf8-f590ebd216d7/manager/0.log" Feb 03 11:17:11 crc kubenswrapper[5010]: I0203 11:17:11.504873 5010 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-jvb56_74803e29-48a3-4667-bcdb-a94f381545b5/manager/0.log" Feb 03 11:17:11 crc kubenswrapper[5010]: I0203 11:17:11.699634 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-j87lc_fd413d86-2cda-4079-a895-5cb60928a47f/manager/0.log" Feb 03 11:17:11 crc kubenswrapper[5010]: I0203 11:17:11.813469 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-gnxws_9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe/manager/0.log" Feb 03 11:17:11 crc kubenswrapper[5010]: I0203 11:17:11.928687 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-7szqs_d33dc0fd-847b-41cc-a8ac-afde40120ba2/manager/0.log" Feb 03 11:17:12 crc kubenswrapper[5010]: I0203 11:17:12.043179 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-k765q_9dc494bd-d6ef-4a22-8312-67750ebb3dbe/manager/0.log" Feb 03 11:17:12 crc kubenswrapper[5010]: I0203 11:17:12.244049 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-w7ldz_2f204595-5d98-4c16-b5d1-5004c6cae836/manager/0.log" Feb 03 11:17:12 crc kubenswrapper[5010]: I0203 11:17:12.339583 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-vlmtm_5fafda3f-e0cd-4477-9c10-442af83a835b/manager/0.log" Feb 03 11:17:12 crc kubenswrapper[5010]: I0203 11:17:12.527615 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-qrkwl_7f20ca5f-d244-45be-864d-3b8ad3d456ea/manager/0.log" Feb 03 11:17:12 crc kubenswrapper[5010]: I0203 11:17:12.565149 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-gb8tp_1a136ea1-ab68-4f60-8fb2-969363f25337/manager/0.log" Feb 03 11:17:12 crc kubenswrapper[5010]: I0203 11:17:12.768700 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-5zbbw_42f76062-3a9d-45c1-b928-d9ca236ec8ab/manager/0.log" Feb 03 11:17:12 crc kubenswrapper[5010]: I0203 11:17:12.877518 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-pwdks_4f112d60-8db7-4ec2-a82d-c7627ade05a3/manager/0.log" Feb 03 11:17:13 crc kubenswrapper[5010]: I0203 11:17:13.112377 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-t47jc_21f46dec-fb01-4293-ad08-706eb63a8738/manager/0.log" Feb 03 11:17:13 crc kubenswrapper[5010]: I0203 11:17:13.117251 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-5lzr6_27ab6ab7-e411-466c-bc4a-97d1660c547e/manager/0.log" Feb 03 11:17:13 crc kubenswrapper[5010]: I0203 11:17:13.305147 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs_76bde002-75f6-4c4a-af3d-16aec5a221f4/manager/0.log" Feb 03 11:17:13 crc kubenswrapper[5010]: I0203 11:17:13.464009 5010 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-578f994c6c-72ld2_bde44bc9-c06a-4c2b-aad8-6f3247272024/operator/0.log" Feb 03 11:17:13 crc kubenswrapper[5010]: I0203 11:17:13.659692 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fv5km_1e93c0a0-5a7b-40d7-aaee-e31455baf139/registry-server/0.log" Feb 03 11:17:13 crc kubenswrapper[5010]: I0203 11:17:13.913169 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-g8qz8_3e47047f-9303-47e2-8312-c83315e1a3ff/manager/0.log" Feb 03 11:17:13 crc kubenswrapper[5010]: I0203 11:17:13.945718 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-d99mj_8251c193-3c53-4651-87da-8b216cf907aa/manager/0.log" Feb 03 11:17:14 crc kubenswrapper[5010]: I0203 11:17:14.133583 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-kj7mj_2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a/operator/0.log" Feb 03 11:17:14 crc kubenswrapper[5010]: I0203 11:17:14.241843 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-mrvfq_84af1f21-c29e-4846-9ce1-ea345cbad4fc/manager/0.log" Feb 03 11:17:14 crc kubenswrapper[5010]: I0203 11:17:14.479260 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-pgwx2_a62d6669-692b-4909-b192-4348ac82a50d/manager/0.log" Feb 03 11:17:14 crc kubenswrapper[5010]: I0203 11:17:14.497122 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-ck5g7_e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58/manager/0.log" Feb 03 11:17:14 crc kubenswrapper[5010]: I0203 11:17:14.672026 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-844f879456-5ktjc_54aaeb1d-8a23-413f-b1f4-5115b167d78b/manager/0.log" Feb 03 11:17:14 crc kubenswrapper[5010]: I0203 11:17:14.744595 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-ftqqr_37a4f3fa-bbaf-433d-9835-6ac576351651/manager/0.log" Feb 03 11:17:24 crc kubenswrapper[5010]: I0203 11:17:24.502405 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:17:24 crc kubenswrapper[5010]: E0203 11:17:24.503417 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:17:35 crc kubenswrapper[5010]: I0203 11:17:35.503360 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:17:35 crc kubenswrapper[5010]: E0203 11:17:35.506053 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:17:37 crc kubenswrapper[5010]: I0203 11:17:37.411386 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xcpwg_ba766e4c-056f-4be6-a4b9-05592b641f87/control-plane-machine-set-operator/0.log" Feb 03 11:17:37 crc kubenswrapper[5010]: I0203 11:17:37.721332 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5mq4r_dc73dc6e-53ff-48b8-932e-d5aeb839f2dd/kube-rbac-proxy/0.log" Feb 03 11:17:37 crc kubenswrapper[5010]: I0203 11:17:37.744105 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5mq4r_dc73dc6e-53ff-48b8-932e-d5aeb839f2dd/machine-api-operator/0.log" Feb 03 11:17:47 crc kubenswrapper[5010]: I0203 11:17:47.503153 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:17:47 crc kubenswrapper[5010]: E0203 11:17:47.504508 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:17:52 crc kubenswrapper[5010]: I0203 11:17:52.594312 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-wtwpn_7746ae6f-d9a0-4bba-a7bc-4920ed478ff4/cert-manager-controller/0.log" Feb 03 11:17:52 crc kubenswrapper[5010]: I0203 11:17:52.778902 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-b5ngd_b9d02d93-3df5-4e4a-99b3-07329087dc2c/cert-manager-cainjector/0.log" Feb 03 11:17:52 crc kubenswrapper[5010]: I0203 11:17:52.867396 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-bfc2c_26bf0193-c1b8-4018-a7e4-4429a4292dfb/cert-manager-webhook/0.log" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.337631 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-89hxw"] Feb 03 11:17:55 crc kubenswrapper[5010]: E0203 11:17:55.340291 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerName="extract-utilities" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.340350 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerName="extract-utilities" Feb 03 11:17:55 crc kubenswrapper[5010]: E0203 11:17:55.340364 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerName="registry-server" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.340373 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerName="registry-server" Feb 03 11:17:55 crc kubenswrapper[5010]: E0203 11:17:55.340401 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" 
containerName="extract-content" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.340406 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerName="extract-content" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.340641 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerName="registry-server" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.344761 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.377423 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-89hxw"] Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.437160 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5plbq\" (UniqueName: \"kubernetes.io/projected/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-kube-api-access-5plbq\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.437279 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-utilities\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.437345 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-catalog-content\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.539467 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-catalog-content\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.539652 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5plbq\" (UniqueName: \"kubernetes.io/projected/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-kube-api-access-5plbq\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.539757 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-utilities\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.540093 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-catalog-content\") pod \"certified-operators-89hxw\" (UID: 
\"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.540282 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-utilities\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.580462 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5plbq\" (UniqueName: \"kubernetes.io/projected/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-kube-api-access-5plbq\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.684168 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:56 crc kubenswrapper[5010]: I0203 11:17:56.210184 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-89hxw"] Feb 03 11:17:56 crc kubenswrapper[5010]: I0203 11:17:56.265611 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89hxw" event={"ID":"d167930a-e7f9-4572-b3f5-050ef9b2ba5b","Type":"ContainerStarted","Data":"fb6b55f00f377b2ed89fbe28d48708a548b28708983b2041366114f9dd31d5da"} Feb 03 11:17:57 crc kubenswrapper[5010]: I0203 11:17:57.277270 5010 generic.go:334] "Generic (PLEG): container finished" podID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerID="2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74" exitCode=0 Feb 03 11:17:57 crc kubenswrapper[5010]: I0203 11:17:57.277324 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89hxw" event={"ID":"d167930a-e7f9-4572-b3f5-050ef9b2ba5b","Type":"ContainerDied","Data":"2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74"} Feb 03 11:17:57 crc kubenswrapper[5010]: I0203 11:17:57.280632 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 11:17:59 crc kubenswrapper[5010]: I0203 11:17:59.299785 5010 generic.go:334] "Generic (PLEG): container finished" podID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerID="6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812" exitCode=0 Feb 03 11:17:59 crc kubenswrapper[5010]: I0203 11:17:59.299884 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89hxw" event={"ID":"d167930a-e7f9-4572-b3f5-050ef9b2ba5b","Type":"ContainerDied","Data":"6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812"} Feb 03 11:17:59 crc kubenswrapper[5010]: I0203 11:17:59.502499 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:17:59 crc kubenswrapper[5010]: E0203 11:17:59.502820 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:18:01 crc kubenswrapper[5010]: I0203 11:18:01.323992 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89hxw" event={"ID":"d167930a-e7f9-4572-b3f5-050ef9b2ba5b","Type":"ContainerStarted","Data":"05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16"} Feb 03 11:18:01 crc kubenswrapper[5010]: I0203 11:18:01.354457 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-89hxw" podStartSLOduration=3.862059736 podStartE2EDuration="6.354421747s" podCreationTimestamp="2026-02-03 11:17:55 +0000 UTC" firstStartedPulling="2026-02-03 11:17:57.280032474 +0000 UTC m=+4547.436008623" lastFinishedPulling="2026-02-03 11:17:59.772394505 +0000 UTC m=+4549.928370634" observedRunningTime="2026-02-03 11:18:01.34587933 +0000 UTC m=+4551.501855459" watchObservedRunningTime="2026-02-03 11:18:01.354421747 +0000 UTC m=+4551.510397876" Feb 03 11:18:05 crc kubenswrapper[5010]: I0203 11:18:05.684327 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:18:05 crc kubenswrapper[5010]: I0203 11:18:05.684780 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:18:05 crc kubenswrapper[5010]: I0203 11:18:05.732631 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:18:06 crc kubenswrapper[5010]: I0203 11:18:06.446002 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:18:06 crc kubenswrapper[5010]: I0203 11:18:06.513044 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-89hxw"] Feb 03 11:18:08 crc kubenswrapper[5010]: I0203 11:18:08.425180 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-89hxw" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="registry-server" containerID="cri-o://05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16" gracePeriod=2 Feb 03 11:18:08 crc kubenswrapper[5010]: I0203 11:18:08.804568 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-npjjg_a09e0456-1529-4ece-9266-d02a283d6bd1/nmstate-console-plugin/0.log" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.010689 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.054928 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5plbq\" (UniqueName: \"kubernetes.io/projected/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-kube-api-access-5plbq\") pod \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.055066 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-catalog-content\") pod \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.055302 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-utilities\") pod \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.056242 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-utilities" (OuterVolumeSpecName: "utilities") pod "d167930a-e7f9-4572-b3f5-050ef9b2ba5b" (UID: "d167930a-e7f9-4572-b3f5-050ef9b2ba5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.076661 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-kube-api-access-5plbq" (OuterVolumeSpecName: "kube-api-access-5plbq") pod "d167930a-e7f9-4572-b3f5-050ef9b2ba5b" (UID: "d167930a-e7f9-4572-b3f5-050ef9b2ba5b"). InnerVolumeSpecName "kube-api-access-5plbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.159566 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5plbq\" (UniqueName: \"kubernetes.io/projected/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-kube-api-access-5plbq\") on node \"crc\" DevicePath \"\"" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.159646 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.223398 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-55jg2_d47b696a-a1d0-4389-a099-7f375ab72f8c/nmstate-handler/0.log" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.346033 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hl7ls_552fa369-352c-4690-aa39-f0364021feae/kube-rbac-proxy/0.log" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.453665 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hl7ls_552fa369-352c-4690-aa39-f0364021feae/nmstate-metrics/0.log" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.460704 5010 generic.go:334] "Generic (PLEG): container finished" podID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerID="05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16" exitCode=0 Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.460790 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89hxw" event={"ID":"d167930a-e7f9-4572-b3f5-050ef9b2ba5b","Type":"ContainerDied","Data":"05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16"} Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.462543 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89hxw" event={"ID":"d167930a-e7f9-4572-b3f5-050ef9b2ba5b","Type":"ContainerDied","Data":"fb6b55f00f377b2ed89fbe28d48708a548b28708983b2041366114f9dd31d5da"} Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.462604 5010 scope.go:117] "RemoveContainer" containerID="05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.462711 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.523912 5010 scope.go:117] "RemoveContainer" containerID="6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.605464 5010 scope.go:117] "RemoveContainer" containerID="2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.662934 5010 scope.go:117] "RemoveContainer" containerID="05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16" Feb 03 11:18:09 crc kubenswrapper[5010]: E0203 11:18:09.666478 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16\": container with ID starting with 05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16 not found: ID does not exist" containerID="05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.666551 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16"} err="failed to get container status \"05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16\": rpc error: code = NotFound desc = could not find container \"05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16\": container with ID starting with 05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16 not found: ID does not exist" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.666596 5010 scope.go:117] "RemoveContainer" containerID="6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812" Feb 03 11:18:09 crc kubenswrapper[5010]: E0203 11:18:09.672977 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812\": container with ID starting with 6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812 not found: ID does not exist" containerID="6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.673042 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812"} err="failed to get container status \"6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812\": rpc error: code = NotFound desc = could not find container \"6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812\": container with ID starting with 6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812 not found: ID does not exist" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.673078 5010 scope.go:117] "RemoveContainer" containerID="2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74" Feb 03 11:18:09 crc kubenswrapper[5010]: E0203 11:18:09.676611 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74\": container with ID starting with 2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74 not found: ID does not exist" containerID="2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74" 
Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.676642 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74"} err="failed to get container status \"2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74\": rpc error: code = NotFound desc = could not find container \"2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74\": container with ID starting with 2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74 not found: ID does not exist"
Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.761957 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-frs8s_e5c85e5b-ab19-414d-97e6-767b9e01f731/nmstate-operator/0.log"
Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.843403 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-2xtg6_1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a/nmstate-webhook/0.log"
Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.867029 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d167930a-e7f9-4572-b3f5-050ef9b2ba5b" (UID: "d167930a-e7f9-4572-b3f5-050ef9b2ba5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.934962 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 11:18:10 crc kubenswrapper[5010]: I0203 11:18:10.097617 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-89hxw"]
Feb 03 11:18:10 crc kubenswrapper[5010]: I0203 11:18:10.110688 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-89hxw"]
Feb 03 11:18:10 crc kubenswrapper[5010]: I0203 11:18:10.518305 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" path="/var/lib/kubelet/pods/d167930a-e7f9-4572-b3f5-050ef9b2ba5b/volumes"
Feb 03 11:18:10 crc kubenswrapper[5010]: I0203 11:18:10.520177 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"
Feb 03 11:18:10 crc kubenswrapper[5010]: E0203 11:18:10.520594 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:18:21 crc kubenswrapper[5010]: I0203 11:18:21.503329 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"
Feb 03 11:18:21 crc kubenswrapper[5010]: E0203 11:18:21.504273 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:18:24 crc kubenswrapper[5010]: I0203 11:18:24.498697 5010 scope.go:117] "RemoveContainer" containerID="3bd849a4e703cdb76aecc93972aa5f7990799fc9bee08fac17023aef5ff87483"
Feb 03 11:18:24 crc kubenswrapper[5010]: I0203 11:18:24.519711 5010 scope.go:117] "RemoveContainer" containerID="2edd458b2cfaa2b6e29690d9b6dedd98ec6688b7df796df1d92ea15b8aa6954c"
Feb 03 11:18:24 crc kubenswrapper[5010]: I0203 11:18:24.598099 5010 scope.go:117] "RemoveContainer" containerID="306bee7e759854f6a192fe0ffdf5df25e12e0a3028ac1c2be5e4c36d51b30a5f"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.482031 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x2rfb"]
Feb 03 11:18:32 crc kubenswrapper[5010]: E0203 11:18:32.483060 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="registry-server"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.483082 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="registry-server"
Feb 03 11:18:32 crc kubenswrapper[5010]: E0203 11:18:32.483107 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="extract-content"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.483116 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="extract-content"
Feb 03 11:18:32 crc kubenswrapper[5010]: E0203 11:18:32.483128 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="extract-utilities"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.483138 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="extract-utilities"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.483393 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="registry-server"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.485086 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.496861 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2rfb"]
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.568723 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-utilities\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.568897 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrvl9\" (UniqueName: \"kubernetes.io/projected/9526d09e-786a-4d86-a688-e4afe9b32bfe-kube-api-access-hrvl9\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.569038 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-catalog-content\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.671683 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrvl9\" (UniqueName: \"kubernetes.io/projected/9526d09e-786a-4d86-a688-e4afe9b32bfe-kube-api-access-hrvl9\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.671838 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-catalog-content\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.671978 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-utilities\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.672542 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-catalog-content\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.672557 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-utilities\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:33 crc kubenswrapper[5010]: I0203 11:18:33.382981 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrvl9\" (UniqueName: \"kubernetes.io/projected/9526d09e-786a-4d86-a688-e4afe9b32bfe-kube-api-access-hrvl9\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:33 crc kubenswrapper[5010]: I0203 11:18:33.537728 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:34 crc kubenswrapper[5010]: I0203 11:18:34.260237 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2rfb"]
Feb 03 11:18:34 crc kubenswrapper[5010]: I0203 11:18:34.759411 5010 generic.go:334] "Generic (PLEG): container finished" podID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerID="95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d" exitCode=0
Feb 03 11:18:34 crc kubenswrapper[5010]: I0203 11:18:34.759625 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2rfb" event={"ID":"9526d09e-786a-4d86-a688-e4afe9b32bfe","Type":"ContainerDied","Data":"95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d"}
Feb 03 11:18:34 crc kubenswrapper[5010]: I0203 11:18:34.759673 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2rfb" event={"ID":"9526d09e-786a-4d86-a688-e4afe9b32bfe","Type":"ContainerStarted","Data":"2bff1107a7587f0594df99e45b328a27a6bd5035f60166a6aa071a85d2d649db"}
Feb 03 11:18:35 crc kubenswrapper[5010]: I0203 11:18:35.504301 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"
Feb 03 11:18:35 crc kubenswrapper[5010]: E0203 11:18:35.505685 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:18:36 crc kubenswrapper[5010]: I0203 11:18:36.781168 5010 generic.go:334] "Generic (PLEG): container finished" podID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerID="709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c" exitCode=0
Feb 03 11:18:36 crc kubenswrapper[5010]: I0203 11:18:36.781484 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2rfb" event={"ID":"9526d09e-786a-4d86-a688-e4afe9b32bfe","Type":"ContainerDied","Data":"709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c"}
Feb 03 11:18:37 crc kubenswrapper[5010]: I0203 11:18:37.809707 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2rfb" event={"ID":"9526d09e-786a-4d86-a688-e4afe9b32bfe","Type":"ContainerStarted","Data":"a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1"}
Feb 03 11:18:37 crc kubenswrapper[5010]: I0203 11:18:37.835956 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x2rfb" podStartSLOduration=3.17349481 podStartE2EDuration="5.835925248s" podCreationTimestamp="2026-02-03 11:18:32 +0000 UTC" firstStartedPulling="2026-02-03 11:18:34.761603314 +0000 UTC m=+4584.917579443" lastFinishedPulling="2026-02-03 11:18:37.424033762 +0000 UTC m=+4587.580009881" observedRunningTime="2026-02-03 11:18:37.830200103 +0000 UTC m=+4587.986176242" watchObservedRunningTime="2026-02-03 11:18:37.835925248 +0000 UTC m=+4587.991901387"
Feb 03 11:18:43 crc kubenswrapper[5010]: I0203 11:18:43.538265 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:43 crc kubenswrapper[5010]: I0203 11:18:43.538935 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:43 crc kubenswrapper[5010]: I0203 11:18:43.624339 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:43 crc kubenswrapper[5010]: I0203 11:18:43.930200 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:44 crc kubenswrapper[5010]: I0203 11:18:44.007681 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x2rfb"]
Feb 03 11:18:44 crc kubenswrapper[5010]: I0203 11:18:44.951677 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lpqgh_19f856e9-2325-41eb-8ed3-4daff562e84a/kube-rbac-proxy/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.080759 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lpqgh_19f856e9-2325-41eb-8ed3-4daff562e84a/controller/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.206486 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.418721 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.466548 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.466997 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.475407 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.819636 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.839470 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.866899 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.869268 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.886692 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x2rfb" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="registry-server" containerID="cri-o://a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1" gracePeriod=2
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.074835 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.127014 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.182239 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/controller/0.log"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.205826 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.376094 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.462343 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/frr-metrics/0.log"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.484685 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/kube-rbac-proxy/0.log"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.494081 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-catalog-content\") pod \"9526d09e-786a-4d86-a688-e4afe9b32bfe\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") "
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.494275 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrvl9\" (UniqueName: \"kubernetes.io/projected/9526d09e-786a-4d86-a688-e4afe9b32bfe-kube-api-access-hrvl9\") pod \"9526d09e-786a-4d86-a688-e4afe9b32bfe\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") "
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.494353 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-utilities\") pod \"9526d09e-786a-4d86-a688-e4afe9b32bfe\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") "
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.497787 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-utilities" (OuterVolumeSpecName: "utilities") pod "9526d09e-786a-4d86-a688-e4afe9b32bfe" (UID: "9526d09e-786a-4d86-a688-e4afe9b32bfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.502411 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"
Feb 03 11:18:46 crc kubenswrapper[5010]: E0203 11:18:46.503104 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.505478 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/kube-rbac-proxy-frr/0.log"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.599824 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.889643 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9526d09e-786a-4d86-a688-e4afe9b32bfe-kube-api-access-hrvl9" (OuterVolumeSpecName: "kube-api-access-hrvl9") pod "9526d09e-786a-4d86-a688-e4afe9b32bfe" (UID: "9526d09e-786a-4d86-a688-e4afe9b32bfe"). InnerVolumeSpecName "kube-api-access-hrvl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.907643 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrvl9\" (UniqueName: \"kubernetes.io/projected/9526d09e-786a-4d86-a688-e4afe9b32bfe-kube-api-access-hrvl9\") on node \"crc\" DevicePath \"\""
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.910722 5010 generic.go:334] "Generic (PLEG): container finished" podID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerID="a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1" exitCode=0
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.910785 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2rfb" event={"ID":"9526d09e-786a-4d86-a688-e4afe9b32bfe","Type":"ContainerDied","Data":"a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1"}
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.910824 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2rfb" event={"ID":"9526d09e-786a-4d86-a688-e4afe9b32bfe","Type":"ContainerDied","Data":"2bff1107a7587f0594df99e45b328a27a6bd5035f60166a6aa071a85d2d649db"}
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.910849 5010 scope.go:117] "RemoveContainer" containerID="a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.911131 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.019167 5010 scope.go:117] "RemoveContainer" containerID="709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.098396 5010 scope.go:117] "RemoveContainer" containerID="95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.144171 5010 scope.go:117] "RemoveContainer" containerID="a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1"
Feb 03 11:18:47 crc kubenswrapper[5010]: E0203 11:18:47.157445 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1\": container with ID starting with a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1 not found: ID does not exist" containerID="a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.157505 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1"} err="failed to get container status \"a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1\": rpc error: code = NotFound desc = could not find container \"a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1\": container with ID starting with a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1 not found: ID does not exist"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.157541 5010 scope.go:117] "RemoveContainer" containerID="709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c"
Feb 03 11:18:47 crc kubenswrapper[5010]: E0203 11:18:47.158166 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c\": container with ID starting with 709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c not found: ID does not exist" containerID="709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.158199 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c"} err="failed to get container status \"709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c\": rpc error: code = NotFound desc = could not find container \"709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c\": container with ID starting with 709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c not found: ID does not exist"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.158228 5010 scope.go:117] "RemoveContainer" containerID="95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d"
Feb 03 11:18:47 crc kubenswrapper[5010]: E0203 11:18:47.158859 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d\": container with ID starting with 95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d not found: ID does not exist" containerID="95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.158894 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d"} err="failed to get container status \"95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d\": rpc error: code = NotFound desc = could not find container \"95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d\": container with ID starting with 95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d not found: ID does not exist"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.187417 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9526d09e-786a-4d86-a688-e4afe9b32bfe" (UID: "9526d09e-786a-4d86-a688-e4afe9b32bfe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.216297 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.245076 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-dbqxw_f6ea4a71-2a4d-48cd-9dda-ba453a1c8766/frr-k8s-webhook-server/0.log"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.266719 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x2rfb"]
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.280586 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x2rfb"]
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.290472 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/reloader/0.log"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.693567 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76d7f7cd57-dncnc_5ec28393-ea76-4413-a903-612126368291/manager/0.log"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.809938 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/frr/0.log"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.872663 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5b857c8d44-88x9l_d90f33c9-1c81-4b74-a905-71aed9ecf222/webhook-server/0.log"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.930795 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-mlsql_72e88a76-8c59-4d07-813e-d7d505d14c3b/kube-rbac-proxy/0.log"
Feb 03 11:18:48 crc kubenswrapper[5010]: I0203 11:18:48.465926 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-mlsql_72e88a76-8c59-4d07-813e-d7d505d14c3b/speaker/0.log"
Feb 03 11:18:48 crc kubenswrapper[5010]: I0203 11:18:48.534920 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" path="/var/lib/kubelet/pods/9526d09e-786a-4d86-a688-e4afe9b32bfe/volumes"
Feb 03 11:18:59 crc kubenswrapper[5010]: I0203 11:18:59.502719 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"
Feb 03 11:18:59 crc kubenswrapper[5010]: E0203 11:18:59.503581 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:19:05 crc kubenswrapper[5010]: I0203 11:19:05.743405 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/util/0.log"
Feb 03 11:19:05 crc kubenswrapper[5010]: I0203 11:19:05.915517 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/util/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.018706 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/pull/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.052566 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/pull/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.269421 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/util/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.270650 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/pull/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.312350 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/extract/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.511449 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/util/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.716021 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/util/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.716863 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/pull/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.772605 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/pull/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.938030 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/pull/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.992330 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/util/0.log"
Feb 03 11:19:07 crc kubenswrapper[5010]: I0203 11:19:07.022834 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/extract/0.log"
Feb 03 11:19:07 crc kubenswrapper[5010]: I0203 11:19:07.169691 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-utilities/0.log"
Feb 03 11:19:07 crc kubenswrapper[5010]: I0203 11:19:07.420587 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-content/0.log"
Feb 03 11:19:07 crc kubenswrapper[5010]: I0203 11:19:07.427506 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-utilities/0.log"
Feb 03 11:19:07 crc kubenswrapper[5010]: I0203 11:19:07.668889 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-content/0.log"
Feb 03 11:19:07 crc kubenswrapper[5010]: I0203 11:19:07.889470 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-utilities/0.log"
Feb 03 11:19:07 crc kubenswrapper[5010]: I0203 11:19:07.935016 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-content/0.log"
Feb 03 11:19:08 crc kubenswrapper[5010]: I0203 11:19:08.735253 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/registry-server/0.log"
Feb 03 11:19:08 crc kubenswrapper[5010]: I0203 11:19:08.838138 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-utilities/0.log"
Feb 03 11:19:08 crc kubenswrapper[5010]: I0203 11:19:08.990632 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-utilities/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.035540 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-content/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.066482 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-content/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.229162 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-utilities/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.254344 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-content/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.527054 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lskbc_a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe/marketplace-operator/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.709401 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-utilities/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.912990 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-content/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.927613 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/registry-server/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.971006 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-content/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.971137 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-utilities/0.log"
Feb 03 11:19:10 crc kubenswrapper[5010]: I0203 11:19:10.518565 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"
Feb 03 11:19:10 crc kubenswrapper[5010]: E0203 11:19:10.519017 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:19:10 crc kubenswrapper[5010]: I0203 11:19:10.743442 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-content/0.log"
Feb 03 11:19:10 crc kubenswrapper[5010]: I0203 11:19:10.812397 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-utilities/0.log"
Feb 03 11:19:10 crc kubenswrapper[5010]: I0203 11:19:10.862463 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-utilities/0.log"
Feb 03 11:19:10 crc kubenswrapper[5010]: I0203 11:19:10.886376 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/registry-server/0.log"
Feb 03 11:19:11 crc kubenswrapper[5010]: I0203 11:19:11.043314 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-content/0.log"
Feb 03 11:19:11 crc kubenswrapper[5010]: I0203 11:19:11.050064 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-content/0.log"
Feb 03 11:19:11 crc kubenswrapper[5010]: I0203 11:19:11.062536 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-utilities/0.log"
Feb 03 11:19:11 crc kubenswrapper[5010]: I0203 11:19:11.315303 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-content/0.log"
Feb 03 11:19:11 crc kubenswrapper[5010]: I0203 11:19:11.321828 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-utilities/0.log"
Feb 03 11:19:11 crc kubenswrapper[5010]: I0203 11:19:11.935339 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/registry-server/0.log"
Feb 03 11:19:25 crc kubenswrapper[5010]: I0203 11:19:25.505595 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"
Feb 03 11:19:26 crc kubenswrapper[5010]: I0203 11:19:26.629492 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"498da426eb755a9dc8fd80e2d0fdf6de3005068e582a4256ebdaa141ac61bf48"}
Feb 03 11:21:22 crc kubenswrapper[5010]: I0203 11:21:22.736309 5010 generic.go:334] "Generic (PLEG): container finished" podID="9734985d-a674-4c92-b03c-7ca708780de2" containerID="1bb6ed59c0b4992b1aaa8c727fe9862558803252bbff9dc2431ce922cbca729c" exitCode=0
Feb 03 11:21:22 crc kubenswrapper[5010]: I0203 11:21:22.736410 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/must-gather-xf96m" event={"ID":"9734985d-a674-4c92-b03c-7ca708780de2","Type":"ContainerDied","Data":"1bb6ed59c0b4992b1aaa8c727fe9862558803252bbff9dc2431ce922cbca729c"}
Feb 03 11:21:22 crc kubenswrapper[5010]: I0203 11:21:22.738909 5010 scope.go:117] "RemoveContainer" containerID="1bb6ed59c0b4992b1aaa8c727fe9862558803252bbff9dc2431ce922cbca729c"
Feb 03 11:21:23 crc kubenswrapper[5010]: I0203 11:21:23.152204 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mcw6z_must-gather-xf96m_9734985d-a674-4c92-b03c-7ca708780de2/gather/0.log"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.230709 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vl77p"]
Feb 03 11:21:30 crc kubenswrapper[5010]: E0203 11:21:30.232099 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="registry-server"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.232117 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="registry-server"
Feb 03 11:21:30 crc kubenswrapper[5010]: E0203 11:21:30.232137 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="extract-utilities"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.232144 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="extract-utilities"
Feb 03 11:21:30 crc kubenswrapper[5010]: E0203 11:21:30.232163 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="extract-content"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.232169 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="extract-content"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.232440 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="registry-server"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.234305 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.251039 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vl77p"]
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.299125 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-utilities\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.299349 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-catalog-content\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.299649 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c4wr\" (UniqueName: \"kubernetes.io/projected/1c52790b-6f85-4186-8264-58e7e9cecb86-kube-api-access-7c4wr\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.401327 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-catalog-content\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.401440 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c4wr\" (UniqueName: \"kubernetes.io/projected/1c52790b-6f85-4186-8264-58e7e9cecb86-kube-api-access-7c4wr\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.401562 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-utilities\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.402187 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-utilities\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.402176 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-catalog-content\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.428462 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c4wr\" (UniqueName: \"kubernetes.io/projected/1c52790b-6f85-4186-8264-58e7e9cecb86-kube-api-access-7c4wr\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.558895 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:31 crc kubenswrapper[5010]: I0203 11:21:31.117640 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vl77p"]
Feb 03 11:21:31 crc kubenswrapper[5010]: I0203 11:21:31.841058 5010 generic.go:334] "Generic (PLEG): container finished" podID="1c52790b-6f85-4186-8264-58e7e9cecb86" containerID="fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915" exitCode=0
Feb 03 11:21:31 crc kubenswrapper[5010]: I0203 11:21:31.841117 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vl77p" event={"ID":"1c52790b-6f85-4186-8264-58e7e9cecb86","Type":"ContainerDied","Data":"fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915"}
Feb 03 11:21:31 crc kubenswrapper[5010]: I0203 11:21:31.841598 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vl77p" event={"ID":"1c52790b-6f85-4186-8264-58e7e9cecb86","Type":"ContainerStarted","Data":"22b60b6c17f25c7c63dfd793804cd8900a0973bf01477d86497ce7e668e61f5d"}
Feb 03 11:21:33 crc kubenswrapper[5010]: I0203 11:21:33.863507 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vl77p" event={"ID":"1c52790b-6f85-4186-8264-58e7e9cecb86","Type":"ContainerStarted","Data":"a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296"}
Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.528593 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mcw6z/must-gather-xf96m"]
Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.529364 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-mcw6z/must-gather-xf96m" podUID="9734985d-a674-4c92-b03c-7ca708780de2" containerName="copy" containerID="cri-o://10474f5f43472032315addbe669cd60be39554b99965e76916b96cb1a8a1f7cb" gracePeriod=2
Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.541794 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mcw6z/must-gather-xf96m"]
Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.875105 5010 generic.go:334] "Generic (PLEG): container finished" podID="1c52790b-6f85-4186-8264-58e7e9cecb86" containerID="a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296" exitCode=0
Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.875688 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vl77p" event={"ID":"1c52790b-6f85-4186-8264-58e7e9cecb86","Type":"ContainerDied","Data":"a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296"}
Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.878782 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mcw6z_must-gather-xf96m_9734985d-a674-4c92-b03c-7ca708780de2/copy/0.log"
Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.879725 5010 generic.go:334] "Generic (PLEG): container finished" podID="9734985d-a674-4c92-b03c-7ca708780de2" containerID="10474f5f43472032315addbe669cd60be39554b99965e76916b96cb1a8a1f7cb" exitCode=143
Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.999409 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mcw6z_must-gather-xf96m_9734985d-a674-4c92-b03c-7ca708780de2/copy/0.log"
Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.000194 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/must-gather-xf96m"
Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.113731 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9734985d-a674-4c92-b03c-7ca708780de2-must-gather-output\") pod \"9734985d-a674-4c92-b03c-7ca708780de2\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") "
Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.114143 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lc2c\" (UniqueName: \"kubernetes.io/projected/9734985d-a674-4c92-b03c-7ca708780de2-kube-api-access-7lc2c\") pod \"9734985d-a674-4c92-b03c-7ca708780de2\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") "
Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.124551 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9734985d-a674-4c92-b03c-7ca708780de2-kube-api-access-7lc2c" (OuterVolumeSpecName: "kube-api-access-7lc2c") pod "9734985d-a674-4c92-b03c-7ca708780de2" (UID: "9734985d-a674-4c92-b03c-7ca708780de2"). InnerVolumeSpecName "kube-api-access-7lc2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.218491 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lc2c\" (UniqueName: \"kubernetes.io/projected/9734985d-a674-4c92-b03c-7ca708780de2-kube-api-access-7lc2c\") on node \"crc\" DevicePath \"\""
Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.356082 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9734985d-a674-4c92-b03c-7ca708780de2-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9734985d-a674-4c92-b03c-7ca708780de2" (UID: "9734985d-a674-4c92-b03c-7ca708780de2"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.424304 5010 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9734985d-a674-4c92-b03c-7ca708780de2-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.897773 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mcw6z_must-gather-xf96m_9734985d-a674-4c92-b03c-7ca708780de2/copy/0.log"
Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.898633 5010 scope.go:117] "RemoveContainer" containerID="10474f5f43472032315addbe669cd60be39554b99965e76916b96cb1a8a1f7cb"
Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.898842 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/must-gather-xf96m"
Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.954541 5010 scope.go:117] "RemoveContainer" containerID="1bb6ed59c0b4992b1aaa8c727fe9862558803252bbff9dc2431ce922cbca729c"
Feb 03 11:21:36 crc kubenswrapper[5010]: I0203 11:21:36.535878 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9734985d-a674-4c92-b03c-7ca708780de2" path="/var/lib/kubelet/pods/9734985d-a674-4c92-b03c-7ca708780de2/volumes"
Feb 03 11:21:36 crc kubenswrapper[5010]: I0203 11:21:36.947202 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vl77p" event={"ID":"1c52790b-6f85-4186-8264-58e7e9cecb86","Type":"ContainerStarted","Data":"1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367"}
Feb 03 11:21:40 crc kubenswrapper[5010]: I0203 11:21:40.559299 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:40 crc kubenswrapper[5010]: I0203 11:21:40.559947 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:41 crc kubenswrapper[5010]: I0203 11:21:41.608648 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vl77p" podUID="1c52790b-6f85-4186-8264-58e7e9cecb86" containerName="registry-server" probeResult="failure" output=<
Feb 03 11:21:41 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s
Feb 03 11:21:41 crc kubenswrapper[5010]: >
Feb 03 11:21:46 crc kubenswrapper[5010]: I0203 11:21:46.390019 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 11:21:46 crc kubenswrapper[5010]: I0203 11:21:46.390628 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 11:21:50 crc kubenswrapper[5010]: I0203 11:21:50.614250 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:50 crc kubenswrapper[5010]: I0203 11:21:50.637741 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vl77p" podStartSLOduration=16.798391357 podStartE2EDuration="20.637718272s" podCreationTimestamp="2026-02-03 11:21:30 +0000 UTC" firstStartedPulling="2026-02-03 11:21:31.843394639 +0000 UTC m=+4761.999370768" lastFinishedPulling="2026-02-03 11:21:35.682721554 +0000 UTC m=+4765.838697683" observedRunningTime="2026-02-03 11:21:36.995270305 +0000 UTC m=+4767.151246444" watchObservedRunningTime="2026-02-03 11:21:50.637718272 +0000 UTC m=+4780.793694391"
Feb 03 11:21:50 crc kubenswrapper[5010]: I0203 11:21:50.669764 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:50 crc kubenswrapper[5010]: I0203 11:21:50.867166 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vl77p"]
Feb 03 11:21:52 crc kubenswrapper[5010]: I0203 11:21:52.520290 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vl77p" podUID="1c52790b-6f85-4186-8264-58e7e9cecb86" containerName="registry-server" containerID="cri-o://1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367" gracePeriod=2
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.165640 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.180734 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-catalog-content\") pod \"1c52790b-6f85-4186-8264-58e7e9cecb86\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") "
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.180878 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-utilities\") pod \"1c52790b-6f85-4186-8264-58e7e9cecb86\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") "
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.180957 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4wr\" (UniqueName: \"kubernetes.io/projected/1c52790b-6f85-4186-8264-58e7e9cecb86-kube-api-access-7c4wr\") pod \"1c52790b-6f85-4186-8264-58e7e9cecb86\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") "
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.181958 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-utilities" (OuterVolumeSpecName: "utilities") pod "1c52790b-6f85-4186-8264-58e7e9cecb86" (UID: "1c52790b-6f85-4186-8264-58e7e9cecb86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.200319 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c52790b-6f85-4186-8264-58e7e9cecb86-kube-api-access-7c4wr" (OuterVolumeSpecName: "kube-api-access-7c4wr") pod "1c52790b-6f85-4186-8264-58e7e9cecb86" (UID: "1c52790b-6f85-4186-8264-58e7e9cecb86"). InnerVolumeSpecName "kube-api-access-7c4wr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.283012 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.283059 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4wr\" (UniqueName: \"kubernetes.io/projected/1c52790b-6f85-4186-8264-58e7e9cecb86-kube-api-access-7c4wr\") on node \"crc\" DevicePath \"\""
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.348693 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c52790b-6f85-4186-8264-58e7e9cecb86" (UID: "1c52790b-6f85-4186-8264-58e7e9cecb86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.385379 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.535966 5010 generic.go:334] "Generic (PLEG): container finished" podID="1c52790b-6f85-4186-8264-58e7e9cecb86" containerID="1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367" exitCode=0
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.536037 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vl77p" event={"ID":"1c52790b-6f85-4186-8264-58e7e9cecb86","Type":"ContainerDied","Data":"1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367"}
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.536115 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vl77p" event={"ID":"1c52790b-6f85-4186-8264-58e7e9cecb86","Type":"ContainerDied","Data":"22b60b6c17f25c7c63dfd793804cd8900a0973bf01477d86497ce7e668e61f5d"}
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.536128 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vl77p"
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.536167 5010 scope.go:117] "RemoveContainer" containerID="1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367"
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.563162 5010 scope.go:117] "RemoveContainer" containerID="a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296"
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.589512 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vl77p"]
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.599366 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vl77p"]
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.611650 5010 scope.go:117] "RemoveContainer" containerID="fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915"
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.650887 5010 scope.go:117] "RemoveContainer" containerID="1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367"
Feb 03 11:21:53 crc kubenswrapper[5010]: E0203 11:21:53.651391 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367\": container with ID starting with 1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367 not found: ID does not exist" containerID="1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367"
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.651430 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367"} err="failed to get container status \"1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367\": rpc error: code = NotFound desc = could not find container \"1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367\": container with ID starting with 1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367 not found: ID does not exist"
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.651456 5010 scope.go:117] "RemoveContainer" containerID="a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296"
Feb 03 11:21:53 crc kubenswrapper[5010]: E0203 11:21:53.651709 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296\": container with ID starting with a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296 not found: ID does not exist" containerID="a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296"
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.651735 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296"} err="failed to get container status \"a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296\": rpc error: code = NotFound desc = could not find container \"a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296\": container with ID starting with a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296 not found: ID does not exist"
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.651766 5010 scope.go:117] "RemoveContainer" containerID="fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915"
Feb 03 11:21:53 crc kubenswrapper[5010]: E0203 11:21:53.652068 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915\": container with ID starting with fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915 not found: ID does not exist" containerID="fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915"
Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.652102 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915"} err="failed to get container status \"fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915\": rpc error: code = NotFound desc = could not find container \"fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915\": container with ID starting with fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915 not found: ID does not exist"
Feb 03 11:21:54 crc kubenswrapper[5010]: I0203 11:21:54.526210 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c52790b-6f85-4186-8264-58e7e9cecb86" path="/var/lib/kubelet/pods/1c52790b-6f85-4186-8264-58e7e9cecb86/volumes"
Feb 03 11:22:16 crc kubenswrapper[5010]: I0203 11:22:16.390698 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 11:22:16 crc kubenswrapper[5010]: I0203 11:22:16.391661 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 11:22:46 crc kubenswrapper[5010]: I0203 11:22:46.390443 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 11:22:46 crc kubenswrapper[5010]: I0203 11:22:46.390967 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 11:22:46 crc kubenswrapper[5010]: I0203 11:22:46.391033 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz"
Feb 03 11:22:46 crc kubenswrapper[5010]: I0203 11:22:46.392044 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"498da426eb755a9dc8fd80e2d0fdf6de3005068e582a4256ebdaa141ac61bf48"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 11:22:46 crc
Feb 03 11:22:46 crc kubenswrapper[5010]: I0203 11:22:46.392121 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://498da426eb755a9dc8fd80e2d0fdf6de3005068e582a4256ebdaa141ac61bf48" gracePeriod=600
Feb 03 11:22:47 crc kubenswrapper[5010]: I0203 11:22:47.131715 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="498da426eb755a9dc8fd80e2d0fdf6de3005068e582a4256ebdaa141ac61bf48" exitCode=0
Feb 03 11:22:47 crc kubenswrapper[5010]: I0203 11:22:47.131832 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"498da426eb755a9dc8fd80e2d0fdf6de3005068e582a4256ebdaa141ac61bf48"}
Feb 03 11:22:47 crc kubenswrapper[5010]: I0203 11:22:47.133037 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"204b51e4d5b74a8157191003f28432d43c32c9430018526b50e2bb5e62e1873a"}
Feb 03 11:22:47 crc kubenswrapper[5010]: I0203 11:22:47.133071 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"
Feb 03 11:24:46 crc kubenswrapper[5010]: I0203 11:24:46.389982 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 11:24:46 crc kubenswrapper[5010]: I0203 11:24:46.390627 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz
var/home/core/zuul-output/logs/crc-cloud/
var/home/core/zuul-output/artifacts/
var/home/core/zuul-output/docs/